In the first part of this series, we examined how to think about generative AI and China, including regulatory issues. In this installment, we will examine how leading generative AI companies in China view the sector and its challenges. The following commentary is primarily based on extensive discussions with the major players in China over the past several months.
In developing, training, and deploying generative AI applications based on foundation models, Chinese companies face different challenges from their counterparts in the United States and other countries such as Canada, South Korea, and Saudi Arabia. This is because AI in general, and generative AI in particular, has become a focal point of U.S.-China tech competition, driven by fears that AI could become a game changer for China’s military modernization. Even if those fears are overblown, they have led to U.S. policies that attempt to slow the development of China’s domestic AI capabilities (see here and here for details).
Domestically, Chinese companies are facing a particularly complex regulatory landscape, characterized by rapidly evolving rules and standards for the development and deployment of generative AI products. China’s internet regulators are well ahead of their counterparts in the U.S. in pursuing “vertical” regulatory approaches to AI in general and generative AI in particular, issuing interim regulations on generative AI in July. Many details about implementation of the new regulations remain unclear, but they were significantly lightened from an initial draft in April, as Matt Sheehan discusses here. China’s information security standards body TC-260 is also getting in on the action, drawing on the interim regulations to draft standards around generative AI, including instituting implicit and explicit watermarks to identify AI-generated content (AIGC).
Other standards work is being led by the China Electronics Standardization Institute (CESI). In July, as part of work under the National Artificial Intelligence Standardization General Group, CESI announced the establishment of a large model special group headed by the Shanghai Artificial Intelligence Lab, whose members include Baidu, Huawei, Alibaba Cloud, Qihoo 360, and China Mobile. In addition, an expert group organized by the Chinese Academy of Social Sciences (CASS) released a model AI law in August, which is likely to provide significant input to the drafting of a national AI law expected to be released in draft form later this year. The proposal lays out a risk-based approach to AI governance, calling for a negative list of AI products and services, as well as a national AI office to oversee AI development.
These regulatory developments are noteworthy for several reasons. Foremost, they are among the world’s first to attempt to tackle some of the more technical components of AI governance. Watermarking, for instance, has become a new buzzword among regulators and industry insiders, even in the U.S. and the EU, as a key way to improve AI model transparency. However, no entity has been able to develop an effective watermarking tool that is interoperable among different platforms. China’s TC-260 standards are among the first to provide guidance on these issues, and could therefore provide a framework for what is possible when it comes to implicit watermarking of AI-generated content. Similarly, China’s algorithmic registry could provide a technical model for what is possible when it comes to algorithmic management. For Chinese industry players, however, these developments mean they are looking at a regulatory landscape that is technically complex and evolving constantly. It’s also one in which regulators are seeking to be prescriptive on AI issues, rather than reactive as in other parts of the world.
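To make the watermarking discussion concrete, the sketch below illustrates the basic distinction between an explicit watermark (a visible label appended to generated output) and an implicit one (a machine-readable identifier hidden in the text itself). It is illustrative only; the label wording, provider identifier, and zero-width-character encoding scheme are our own assumptions and are not drawn from the TC-260 draft standards.

```python
# Illustrative sketch only: explicit vs. implicit watermarking of AI-generated text.
# The label text, provider ID, and zero-width encoding are hypothetical, not TC-260's.

ZERO_WIDTH = {"0": "\u200b", "1": "\u200c"}  # zero-width space / zero-width non-joiner


def add_explicit_watermark(text: str, provider: str) -> str:
    """Append a visible, human-readable AIGC label to the output."""
    return f"{text}\n[AI-generated content | provider: {provider}]"


def add_implicit_watermark(text: str, provider_id: str) -> str:
    """Hide a machine-readable provider ID in the text using zero-width characters."""
    bits = "".join(f"{byte:08b}" for byte in provider_id.encode("utf-8"))
    hidden = "".join(ZERO_WIDTH[b] for b in bits)
    return text + hidden  # invisible to readers, detectable by tooling


def extract_implicit_watermark(text: str) -> str:
    """Recover the hidden provider ID, if present."""
    bits = "".join("0" if ch == ZERO_WIDTH["0"] else "1"
                   for ch in text if ch in ZERO_WIDTH.values())
    data = bytes(int(bits[i:i + 8], 2) for i in range(0, len(bits), 8))
    return data.decode("utf-8", errors="ignore")


if __name__ == "__main__":
    output = "The quarterly report shows steady growth."  # example model output
    marked = add_implicit_watermark(add_explicit_watermark(output, "demo-llm"), "demo-llm")
    print(extract_implicit_watermark(marked))  # -> "demo-llm"
```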
Regulation, yes, but slowdown in development and deployment, no
But even as the domestic regulatory train rolls forward, China’s AI industry leaders are not waiting for the government to take more measures or implement existing guidelines. Overall, Chinese AI players are plowing ahead, with a close eye on Beijing’s evolving regulatory approach. Leading Chinese developers of foundation models, large language models (LLMs), and multimodal models capable of handling text, images, and video are steadily moving ahead with careful deployment of models as a service (MaaS), targeting industry verticals as well as general-purpose chatbots akin to ChatGPT.
The lack of concern about domestic regulation likely reflects industry opinion about China’s approach to AI. Chinese technology companies are acutely aware of the importance that Beijing places on AI innovation and the role that domestic AI companies play in the broader tech ecosystem. Recent actions by regulators also signal that China’s internet watchdog is not intent on holding companies back. The Cyberspace Administration of China (CAC) has already apparently given a dozen or so Chinese generative AI companies the green light to publicly launch their services, even though the Interim Measures only came into force several weeks ago. The CAC’s quick turnaround is an important signal to China’s domestic AI players that, for now, regulation will not get in the way of innovation. It remains unclear how the CAC signaled to the firms that they were free to offer the public access to LLM-based chat agents: there was no official CAC announcement, just a likely regulatory nod while the CAC works out a more formal licensing process.
Another reason could be that Chinese AI companies play a role in shaping domestic regulation. Take the Interim Measures on Generative AI, for example. The initial draft was much more onerous and technically complex than the final measures. The CAC’s revisions reflect extensive feedback from Chinese AI companies and, once again, an understanding that, at present at least, regulation will not hamper domestic innovation.
On a technical level, Chinese companies are closely tracking developments in the United States, particularly in Silicon Valley. Recently, a number of AI players have open-sourced their foundation models, much as several U.S. companies have done. In June, China’s state-backed Beijing Academy of AI (BAAI) released Aquila, an open-source LLM that can handle Chinese and English language inputs. This trend includes deploying open-source foundation models via cloud service providers, allowing leading Chinese cloud players like Alibaba and Baidu to offer advanced Western LLMs like Meta’s Llama-2 as part of their MaaS offerings. Huawei Cloud and Tencent Cloud could also do the same. This mirrors trends in the United States, where foundation model developers are partnering with cloud service providers, as OpenAI has with Microsoft, in part to help cover the high computing costs required to train foundation models.
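As a rough illustration of what offering an open-source foundation model "as a service" involves at the code level, the sketch below loads Meta’s Llama-2 chat model through the open-source Hugging Face transformers library and wraps it in a simple generation function. This is a generic pattern, not the actual serving stack used by Alibaba, Baidu, or any other cloud provider; access to the Llama-2 weights also requires accepting Meta’s license.

```python
# Minimal sketch of serving an open-source foundation model behind a simple function,
# in the spirit of the MaaS offerings described above. Model choice and wrapper are
# illustrative; this is not the code path used by any Chinese cloud provider.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "meta-llama/Llama-2-7b-chat-hf"  # gated: requires accepting Meta's license

tokenizer = AutoTokenizer.from_pretrained(MODEL_ID)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID,
    torch_dtype=torch.float16,  # half precision to fit on a single data-center GPU
    device_map="auto",          # spread layers across available devices
)


def generate(prompt: str, max_new_tokens: int = 256) -> str:
    """Single-prompt completion, the building block of a hosted inference endpoint."""
    inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
    output_ids = model.generate(**inputs, max_new_tokens=max_new_tokens, do_sample=True)
    return tokenizer.decode(output_ids[0], skip_special_tokens=True)


print(generate("Summarize the main uses of large language models in manufacturing."))
```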
Despite these similarities, there are also notable differences between Zhongguancun, Beijing’s tech center, and Silicon Valley. One such point of divergence is in the discussion over artificial general intelligence (AGI). Traditionally, the potential downsides of AGI have been central to debates around regulatory issues in the EU, the U.S., and elsewhere. But these conversations are not well developed within China’s AI community, though we are beginning to see debates about potential AGI harms. At the 2023 Zhongguancun Forum, for example, Baidu CEO Robin Li (李彦宏 Lǐ Yànhóng) touched on the potential harm to humanity, including the loss of control of advanced AI systems. Recently, Chinese ambassador to the United Nations Zhāng Jūn 张军 underscored the need for AI to be safe and controllable. He called for “establishing risk warning and response mechanisms, ensuring human control, and strengthening testing and evaluation of AI such that mankind has the ability to press the stop button at critical moments.” These conversations, however, remain few and far between, given, for example, the lack of major nongovernmental organizations pushing for more discussion of the potential dangers posed by an emergent AGI, and the current focus of companies on developing LLMs capable of generating revenue and focused on industry verticals. That said, the space for a more robust debate about issues like AGI is likely to grow in China as more people gain access to generative AI models and their use becomes more routine.
Another interesting difference between what is happening in the United States and China with respect to generative AI is the way both communities, including venture capitalists, investors, startups, and mature players, are dealing with the huge shortage of critical hardware required for training foundation models. Both sides are contending with a shortage of advanced GPUs, the vast majority of which are manufactured for leading U.S. tech companies by foundry leader Taiwan Semiconductor Manufacturing Company (TSMC) in Taiwan. While TSMC is ramping up capacity to produce the most advanced GPUs optimized for machine learning and training LLMs, the company is constrained by factors such as limited advanced packaging capacity. Recent industry rumors suggest that the U.S. and Japanese governments are trying to persuade TSMC to expand advanced packaging capacity near the fabrication plants the firm is building in the United States and Japan.
These hardware constraints are being exacerbated by U.S. export controls, instituted last October, which could soon further limit China’s access to hardware from GPU leader Nvidia, including chips the firm modified last year to come in under performance thresholds, and make it harder for Chinese domestic alternatives like Biren Technologies to gain traction. But so far, with access to both the modified A800 and H800 chips from Nvidia, plus a growing number of indigenous advanced computing solutions, the impact of U.S. export controls appears to be minimal in the short to medium term. In April, Tencent assessed that its systems using the H800 could reduce training times for AI systems by more than half. Huawei has also used in-house processors to train its Pangu-α model, and other companies have followed suit. It stands to reason, however, that these AI systems are not on the cutting edge, and long-term restrictions on the most advanced chips could strike a blow to China’s AI developers.
Chinese AI developers are also not overly concerned about the Biden administration’s recent curbs on outbound investment in key tech sectors like semiconductors and AI. Chinese investors dominate the AI landscape in China, and where there may be a dearth of U.S. and other Western investment, we could see other big, global venture capital funds step in. At present, there are few if any financial constraints on investments in generative AI.
Generative AI sector expanding rapidly — lots of new entrants
Keeping track of developments and assessing China’s AI progress is a challenging task, as is comparing different foundation and business models and use cases to generate revenue, increase productivity, and stay ahead of regulators. Each leading Chinese player is taking a different approach to getting aboard the generative AI train. As noted in this podcast, Chinese LLM developers will continue to focus on developing applications in specific industry verticals, which are less subject to regulatory issues around content because they are not public-facing.
That said, there remains a considerable gap, as in the United States and other developed countries, between the promise of productivity and other gains from deploying LLMs in industry verticals and the actual revenue models. In the United States, OpenAI has licensed its foundation model to a host of companies developing industry vertical applications. Thus, the situation is largely similar in both countries in terms of the growing number of companies that are actually using LLM-based systems for real-world applications. Let’s take a closer look.
At a high level, one way to break down what is happening in China in this space is by company and industry segment:
Large cloud players: Baidu, Alibaba, Huawei
China’s largest cloud players, including Baidu, Alibaba, and Huawei, have significant experience in developing AI algorithms and applications that precede the generative AI frenzy. Each firm is also developing its own hardware/software stack for AI in general and generative AI and LLMs in particular. Each has unique sets of data that allow the firms to develop industry vertical–specific foundation models, and each is focusing on the B2B side of generative AI.
Baidu, for example, has developed multiple large models (which it sometimes calls “big” models in its English-language materials) for industry verticals. These include the ERNIE Big Model, Information Distribution Big Model, Transportation Big Model, and Energy Big Model, along with a model-as-a-service (MaaS) platform. The company was one of the earliest players in the generative AI space, investing billions of dollars in natural language processing (NLP) research as early as 2017. Some estimates suggest that Baidu’s ERNIE Bot 3.5 outperforms GPT-4 on a number of metrics, though verifying these claims is difficult given the lack of access to Baidu’s API, the programming interface that would allow outside developers to test the model. Huawei has developed the Pangu foundation model, versions of which it is deploying in industry verticals such as mining. Although most of China’s largest models rely on Nvidia chips, Huawei used its own Ascend 910 processors to train its Pangu-α model.
Alibaba, for its part, launched its Tongyi Qianwen model in April, trained on both Chinese and English. It has since also released two open-source models, Qwen-7B and Qwen-7B-Chat, each with 7 billion parameters. Last month, the company’s researchers also introduced Qwen-VL and Qwen-VL-Chat, large-scale vision-language models designed to perceive and understand both text and images. In early September, Alibaba got approval to release a public version of Tongyi Qianwen. But Alibaba’s primary focus remains licensing the LLM technology to enterprises and other organizations. In September, the firm noted that Taobao, DingTalk, OPPO, and Zhejiang University have all begun using Tongyi Qianwen to train their own LLMs or develop applications based on the model. In a good example of enterprise use, Taobao is now testing a Tongyi Qianwen-driven feature that would provide “more precise recommendations” to users conducting product searches. The new generative AI feature will debut on Double 11 (November 11, also called Singles’ Day), China’s biggest online shopping day.
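For readers curious what working with the open-sourced Qwen models looks like in practice, the sketch below follows the multi-turn chat pattern published with the Qwen-7B-Chat release on Hugging Face. The prompts are illustrative, and the snippet is not drawn from Alibaba’s or any partner’s production code.

```python
# Sketch of multi-turn use of the open-sourced Qwen-7B-Chat model, following the usage
# pattern published with the model's release; prompts are illustrative.
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_ID = "Qwen/Qwen-7B-Chat"

# trust_remote_code=True pulls in the custom chat/model code that ships with Qwen
tokenizer = AutoTokenizer.from_pretrained(MODEL_ID, trust_remote_code=True)
model = AutoModelForCausalLM.from_pretrained(
    MODEL_ID, device_map="auto", trust_remote_code=True
).eval()

# First turn: the chat() helper returns the reply and the updated conversation history
response, history = model.chat(
    tokenizer, "Introduce Tongyi Qianwen in three sentences.", history=None
)
print(response)

# Second turn reuses the history so the model keeps the conversational context
response, history = model.chat(
    tokenizer, "Now summarize that in one sentence.", history=history
)
print(response)
```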
Social media companies: ByteDance, Tencent
ByteDance and Tencent have been more low-key with their generative AI plans than companies such as Huawei and Baidu. ByteDance, a social media company with considerable experience developing AI algorithms, and Tencent, with its flagship product WeChat, are looking to generative AI to enhance the user experience on their respective platforms. Both companies have considered designing their own hardware and developing a software stack around it, but for now they appear more dependent on Western sources of hardware than Baidu and Huawei. In early September, Tencent released its Hunyuan LLM, which Vice President Jiang Jie said has become the “backbone of our operations.” The Hunyuan LLM is the foundational model for some 50 Tencent applications. It will power Tencent Meeting, allowing enterprise users to generate automated meeting notes, and will be behind applications for customer service, image generation, and copywriting. ByteDance was one of the companies the CAC approved to release a consumer-facing LLM, the firm’s Doubao AI chatbot.
Both are part of a large group of companies that have placed big orders, in the $5 billion range, for A800 GPUs from Nvidia, with deliveries expected this year and next. ByteDance is testing its own chatbot powered by LLMs under a project code-named Grace, but in June company officials said this was only for internal testing and at an early stage of development. ByteDance’s efforts are likely to focus on multimodal platforms, given the vast amount of video data the firm has access to for training. Engineers at the ByteDance Research Group, for example, published an article in June on building multimodal LLMs using a 200-terabyte dataset of video clips. In addition, demonstrating how closely intertwined the Chinese AI ecosystem is with the broader global space, ByteDance and other Chinese AI companies are using a number of different tools, including from Western companies, to develop LLMs and applications. ByteDance, for example, is using tools from Anyscale, a software company whose platform, built on the open-source Ray framework, supports developing and deploying large-scale AI applications.
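The sketch below shows the kind of distributed workload Ray, the open-source framework behind Anyscale’s platform, is typically used for: fanning preprocessing of training data out across a cluster. The shard sizes, tokenization step, and function names are placeholders, not ByteDance’s actual pipeline.

```python
# Illustrative Ray sketch: parallel preprocessing of raw text for LLM training.
# Everything here (corpus, shard size, "tokenizer") is a placeholder.
import ray

ray.init()  # connects to an existing cluster, or starts a local one


@ray.remote
def preprocess_shard(shard: list[str]) -> list[list[str]]:
    """Clean and tokenize one shard of raw text; runs as a parallel Ray task."""
    return [line.lower().split() for line in shard if line.strip()]


raw_corpus = [f"Example training sentence number {i}" for i in range(10_000)]
shards = [raw_corpus[i:i + 1_000] for i in range(0, len(raw_corpus), 1_000)]

# Launch one remote task per shard; Ray schedules them across available workers
futures = [preprocess_shard.remote(shard) for shard in shards]
tokenized = [tokens for shard in ray.get(futures) for tokens in shard]

print(f"Tokenized {len(tokenized)} sentences across {len(shards)} parallel tasks")
ray.shutdown()
```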
Technology companies: SenseTime, iFlytek, Horizon Robotics, CloudWalk, Yitu, Megvii
China’s large perception AI companies, all of which are currently on the Commerce Department’s Entity List, are developing both hardware and software solutions, in most cases working with cloud service companies, including Huawei. In August, iFlytek and Huawei released the Spark Model 2.0 along with a hardware LLM cluster for developers that includes Huawei’s Ascend GPUs and Kunpeng CPUs, a high-speed interconnect, and Huawei distributed storage. The system is similar to GPU-based clusters sold by Nvidia, such as the DGX A100. In an August comparison of a subset of major Chinese LLMs, the Xinhua Research Institute rated the Spark Model the best, ahead of models from other firms, including Baidu’s ERNIE Bot and models from SenseTime, Qihoo 360, and Alibaba, among others.
Research institutes, universities: BAAI, Tsinghua University, Peking University, Renmin University, Zhejiang Lab, Peng Cheng Lab
There is a lot of development going on outside the large and second-tier private sector companies. China’s state-sponsored lab, BAAI, is particularly advanced in developing LLMs, as evidenced by the release of the WuDao 1.0 and 2.0, CogView, and Aquila open-source models. The institute is particularly well placed to develop some of China’s most advanced models, given its position under the Ministry of Science and Technology and its ability to bring together scientists from other major institutions such as Tsinghua University, Peking University, and the Chinese Academy of Social Sciences. Most of BAAI’s large language models use a mix of Nvidia A100s and indigenous GPUs from Illuvatar, including its Tiangai accelerator card, which is designed to speed up processing. The institute claims that Aquila, a 7-billion-parameter model, was trained on 100 billion tokens at speeds comparable to an A100 cluster. Significantly, Illuvatar says that the Tiangai is compatible with CUDA, a widely used parallel computing platform and programming model developed by Nvidia for general computing on GPUs. Early assessments suggest that BAAI’s models perform comparably to Western models. For example, research suggests that the institute’s Wu Dao Wen Hui 1.0 model, which it developed alongside Tsinghua University and MIT, outperformed Google’s BERT and OpenAI’s GPT-3.5 on a wide range of tasks. BAAI has a suite of pretrained models, including language and multimodal models.
Other players: 01 Wanwu Technology / Project AI 2.0
Similar to efforts by Tesla CEO Elon Musk to drive development of LLMs by leveraging his firm’s experience with advanced computing, Sinovation Ventures founder Kai-fu Lee (李开复 Lǐ Kāifù) announced Project AI 2.0 in March, an attempt to leap ahead in the development of LLMs and applications. Lee, a longtime AI expert and venture capital leader, has been assembling capital and computing power, along with talent, including from Alibaba, Baidu, Microsoft, SAP, Google, and Cisco. The aim is to develop multimodal NLP models along with the distributed computing and infrastructure to support them, according to a March post on his WeChat Moments feed. The project will focus on commercial applications of next-generation models. The firm behind this effort, 01 Wanwu Technology, has already done beta testing of its model and will expand to testing at 30 billion and 70 billion parameters. According to the company, the goal is to allow international scientists and Chinese engineers to build cross-border technologies, create an industry-leading general language model, and then provide the ability to combine pictures, videos, and more via a multimodal model.
Looking forward: Lots of things to watch
With the number of LLM developers in China close to 100, major shortages of computing power potentially looming, domestic regulatory wheels grinding forward, and a broader global debate heating up around balancing support for innovation with regulation, China’s generative AI sector will be on the radar on many fronts going forward. Key things to watch include:
Regulatory guardrails, but lots of support from Beijing
The extent to which Beijing supports generative AI will be critical to watch. Unlike with other technologies, the content-heavy nature of generative AI puts Chinese authorities in a difficult position in terms of developing tools to both support and control the technology. On the one hand, regulatory efforts to corral the content-generation side will continue as implementation of the Interim Measures moves forward and standards bodies grapple with how to actually implement broader rules around generative AI, such as watermarking. Another key area to watch will be how Chinese regulators handle foreign firms that use tools developed with Western LLMs for applications they provide to Chinese customers. The scope of the new Interim Measures suggests that foreign companies operating in China using LLMs will be subject to the same requirements as domestic firms, particularly for public-facing services that involve content generation.
The rapid-fire approvals that came from the CAC in early September for companies to release public-facing versions of their LLMs are significant. The focus of LLM deployment for most firms will continue to be enterprise applications, and some firms, like Tencent, have not released a consumer-facing version of their primary LLM. But offering public-facing versions of some LLMs will make it possible to improve model performance, generate excitement around a company, its models, and its generative AI prowess, and demonstrate to regulators that the firms are sensitive to content-related issues and have implemented guardrails. This process will be worth watching, and initial indications suggest that model developers have used reinforcement learning from human feedback (RLHF) to ensure that obviously censored topics, discussions of President Xi, and other sensitive subjects get little or no response when queried.
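A full RLHF pipeline is beyond the scope of this article, but deployments of this kind typically also layer simpler filters around the model. The sketch below shows one such complementary guardrail, a keyword-based refusal filter applied to both the user prompt and the model’s output; the blocked terms, refusal message, and generate() stub are placeholders rather than any company’s actual moderation system.

```python
# Minimal sketch of a pre/post-generation refusal filter of the kind often layered on
# top of RLHF-tuned models. All names and terms here are placeholders.
BLOCKED_TERMS = {"example-sensitive-topic", "another-blocked-term"}  # placeholder list
REFUSAL = "I'm sorry, I can't help with that topic."


def is_blocked(text: str) -> bool:
    """Check whether the text contains any blocked term (case-insensitive)."""
    lowered = text.lower()
    return any(term in lowered for term in BLOCKED_TERMS)


def generate(prompt: str) -> str:
    """Stand-in for a call to an LLM inference endpoint."""
    return f"(model response to: {prompt})"


def guarded_chat(prompt: str) -> str:
    # Screen the user prompt first, then screen the model's output as well,
    # so sensitive content is caught on either side of the model call.
    if is_blocked(prompt):
        return REFUSAL
    response = generate(prompt)
    return REFUSAL if is_blocked(response) else response


print(guarded_chat("Tell me about example-sensitive-topic"))  # -> refusal
print(guarded_chat("What's the weather like today?"))          # -> model response
```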
With the release of public-facing LLM-driven chatbots, and the licensing of LLMs to enterprises by major cloud players such as Alibaba, Baidu, Huawei, and Tencent in particular, the competition over the next six months will be intense, as these firms vie for clients to license their models and build applications based on them. Each cloud player has its own vertical strengths and a natural client base among its cloud customers. Some shakeout will occur as better models emerge, but all the big cloud players have sufficient compute resources to continue refining and improving their models for the foreseeable future.
At the same time, Beijing is moving quickly to build out large, national advanced computing projects which could play a major role in fueling the development of LLMs, generative AI applications, and the broader AI ecosystem. Built around a smart-city model, the National Unified Computing Power Network (NUCPN), for example, and the associated East Data West Compute (EDWC) project are beginning to be fleshed out and will provide national-level resources for companies developing foundation models and generative AI applications. This process is particularly important for western provinces that would normally lack access to advanced computing. Importantly, one estimate is that only 50% of the new AI-optimized GPUs going into Chinese data centers that are part of the NUCPN are from Nvidia. The other half are being supplied by domestic companies; all data centers are now required to support both Nvidia GPUs and several versions of domestically supplied GPUs.
Participation in international discussions on generative AI regulatory issues
With the United States, EU, and U.K. all looking at how to regulate generative AI, China’s efforts in this space will be the subject of much debate in the coming months, particularly regarding the need to include Chinese representatives in broader discussions about regulation. The U.K. AI Summit in November, for example, will be a key venue for this broader discussion, and there is already robust debate within the U.K. government, as well as among summit participants, regarding what to do about China. Some in the U.K. government, notably Prime Minister Rishi Sunak, and even some China critics within his government, support the inclusion of Chinese officials at the summit. But others, including in Japan and the United States, believe that Western countries should figure out how to regulate generative AI before engaging with China. Observers have recently argued, however, that excluding China, the world’s second-largest AI power, is likely a bad idea, and that doing so could “threaten the security of all states.” For their part, both Chinese officials and China’s largest industry players have publicly announced a desire to engage with international partners and institutions on AI governance. Acting on this opportune moment to engage with China could therefore set the tone for future international cooperation on AI governance.
Chinese firms taking on the challenge of regulatory pressures and U.S. technology restrictions
Chinese firms face clear obstacles in pushing forward with the development of better LLMs while seeking viable business models that generate revenue, but regulatory issues and complications in accessing cutting-edge technology are unlikely to hold the sector back. Regulators in Beijing will of course continue to try to race ahead of generative AI development by drafting rules and regulations, and likely a broader AI law later this year, but companies are already reacting to what they anticipate will be some of the regulatory guardrails. The draft regulations on generative AI unsurprisingly included restrictions on the types of outputs that would run afoul of censors and the kinds of content that are normally restricted for both traditional and digital/social media. On the hardware and software-stack side, there are already many innovative solutions for leveraging existing access to advanced Western hardware. Additionally, major efforts are underway to develop alternatives via support for indigenous companies such as Biren and Moore Threads in the GPU space, and state-backed access to advanced computing via national projects such as NUCPN/EDWC. Further U.S. efforts to control access to advanced GPUs, if implemented, will be important but not decisive in slowing the advance of China’s generative AI sector. The major players have stockpiled A100s, A800s, and likely H800s, and have placed large orders of roughly $5 billion, primarily for A800s, to be delivered in 2023 and likely into 2024. Baidu, ByteDance, Tencent, and Alibaba alone have put in orders of around $1 billion for 100,000 A800s.
Enterprise applications
Despite the public release of some models, the emphasis will remain on enterprise applications and on determining how to generate revenue, and Chinese firms will continue to compare their models with leading U.S. versions.
For example, in early September, Tencent released its Hunyuan LLM aimed at enterprise applications. The firm claims that the model is more capable than ChatGPT and Llama-2, and produces 30% fewer hallucinations (responses at odds with the training data) than Llama-2. These types of claims are difficult to verify, but we are likely to see more of this kind of comparison as Chinese firms strive to take on the best U.S. models. With most of the big players focused on enterprise applications, the competition will be intense. As Jiǎng Jié 蒋杰, Tencent’s vice president, put it, “a war of a hundred models” (百模大战) has begun.
The release in late August of Huawei’s new smartphone featuring the Kirin 9000s system-on-a-chip, including a very capable GPU, highlights what is likely to be a growing dynamic. U.S. attempts to control access to advanced technology will spur innovation in China and increase domestic capabilities to produce systems that may not be cutting edge but are more than adequate for most users. As with the Huawei case, the biggest challenge in terms of AI hardware will come in two or three years, as Western technology advances and Chinese companies remain at some disadvantage. How large that disadvantage will be, and how long it will persist, will be the subject of another article in this series.
Next in series: Deeper dive into domestic GPU hardware suppliers.
Paul Triolo is Senior VP for China and Tech Policy Lead for Albright Stonebridge Group. Previously, he worked at Eurasia Group, where he led the firm’s newest practice, focusing on global technology policy issues. He is frequently quoted in the New York Times, Wall Street Journal, Wired, SCMP, the Economist, and other publications, and appears on CNN, CNBC, and other media outlets that follow global tech issues.