Open-Source AI Is Good for Us

February 9, 2024
in Science

This is a guest post. For the other side of the argument about open-source AI, see the recent guest post “Open-Source AI Is Uniquely Dangerous.”

A culture war is emerging in AI between those who believe model development should be restricted by default and those who believe it should remain open by default. In 2024, that clash is spilling over into the law, and it has major implications for the future of open innovation in AI.

The AI systems most in question are today’s generative AI models that have learned how to read, write, draw, animate, and speak, and which can be used to power tools like ChatGPT. Intertwined with the wider debate over AI regulation is a heated and ongoing disagreement over the risk of open models—models that can be used, modified, and shared by other developers—and the wisdom of releasing their distinctive settings, or “weights,” to the public.

Since the launch of powerful open models like the Llama, Falcon, Mistral, and Stable Diffusion families, critics have pressed to keep other such genies in the bottle. “Open source software and open data can be an extraordinary resource for furthering science,” wrote two U.S. senators to Meta (creator of Llama), but “centralized AI models can be more effectively updated and controlled to prevent and respond to abuse.” Think tanks and closed-source firms have called for AI development to be regulated like nuclear research, with restrictions on who can develop the most powerful AI models. Last month, one commentator argued in IEEE Spectrum that “open-source AI is uniquely dangerous,” echoing calls for the registration and licensing of AI models.

The debate is surfacing in recent efforts to regulate AI. First, the European Union has just finalized its AI Act to govern the development and deployment of AI systems. Among its most hotly contested provisions was whether to apply these rules to “free and open-source” models. Second, following President Biden’s executive order on AI, the U.S. government has begun to compel reports from the developers of certain AI models, and will soon launch a public inquiry into the regulation of “widely-available” AI models.

However our governments choose to regulate AI, we need to promote a diverse AI ecosystem: from large companies building proprietary superintelligence to everyday tinkerers experimenting with open technology. Open models are the bedrock for grassroots innovation in AI.

I serve as head of public policy for Stability AI (makers of Stable Diffusion), where I work with a small team of passionate researchers who share media and language models that are freely used by millions of everyday developers and creators around the world. My concern is that this grassroots ecosystem is uniquely vulnerable to mounting restrictions on who can develop and share models. Eventually, these regulations may lead to limits on fundamental research and collaboration in ways that erode this culture of open development, which made AI possible in the first place and helps make it safer.

Open models promote transparency and competition

Open models play a vital role in helping to drive transparency and competition in AI. Over the coming years, generative AI will support creative, analytic, and scientific applications that go far beyond today’s text and image generators; we’ll see such applications as personalized tutors, desktop healthcare assistants, and backyard film studios. These models will revolutionize essential services, reshape how we access information online, and transform our public and private institutions. In short, AI will become critical infrastructure.

As I have argued before the U.S. Congress and U.K. Parliament, the next wave of digital services should not rely solely on a few “black box” systems operated by a cluster of big tech firms. Today, our digital economy runs on opaque systems that feed us content, control our access to information, determine our exposure to advertising, and mediate our online interactions. We’re unable to inspect these systems or build competitive alternatives. If models—our AI building blocks—are owned by a handful of firms, we risk repeating what played out with the Internet.

We’ve seen what happens when critical digital infrastructure is controlled by just a few companies.

In this environment, open models play a vital role. If a model’s weights are released, researchers, developers, and authorities can “look under the hood” of these AI engines to understand their suitability and to mitigate their vulnerabilities before deploying them in real-world tools. Everyday developers and small businesses can adapt these open models to create new AI applications, tune safer AI models for specific tasks, train more representative AI models for diverse communities, or launch new AI ventures without spending tens of millions of dollars to build a model from scratch.

We know from experience that transparency and competition are the foundation for a thriving digital ecosystem. That’s why open-source software like Android powers most of the world’s smartphones, and why Linux can be found in data centers, nuclear submarines, and SpaceX rockets. Open-source software has contributed as much as US $8.8 trillion in value globally. Indeed, recent breakthroughs in AI were only possible because of open research like the transformer architecture, open code libraries like PyTorch, and open collaboration from researchers and developers around the world.

Regulations may stifle grassroots innovation

Fortunately, no government has ventured to abolish open models altogether. If anything, governments have resisted the most extreme calls to intervene. The White House declined to require premarket licenses for AI models in its executive order. And after a confrontation with its member state governments in December, the E.U. agreed to partially exempt open models from its AI Act. Meanwhile, Singapore is funding a US $52 million open-source development effort for Southeast Asia, and the UAE continues to bankroll some of the largest available open generative AI models. French President Macron has declared “on croit dans l’open-source”—we believe in open-source.

However, the E.U. and U.S. regulations could put the brakes on this culture of open development in AI. For the first time, these instruments establish a legal threshold beyond which models will be deemed “dual use” or “systemic risk” technologies. Those thresholds are based on a range of factors, including the computing power used to train the model. Models over the threshold will attract new regulatory controls, such as notifying authorities of test results and maintaining exhaustive research and development records, and they will lose E.U. exemptions for open-source development.

In one sense, these thresholds are a good faith effort to avoid overregulating AI. They focus regulatory attention on future models with unknown capabilities instead of restricting existing models. Few existing models will meet the current thresholds, and the first that do will be models from well-resourced firms that are equipped to meet the new obligations.

In another sense, however, this approach to regulation is troubling, and augurs a seismic shift in how we govern novel technology. Grassroots innovation may become collateral damage.

Regulations could hurt everyday developers

First, regulating “upstream” components like models could have a disproportionate chilling effect on research in “downstream” systems. Many of the restrictions for above-the-threshold models assume that developers are sophisticated firms with formal relationships to those who use their models. For example, the U.S. executive order requires developers to report on individuals who can access the model’s weights, and detail the steps taken to secure those weights. The E.U. legislation requires developers to conduct “state of the art” evaluations and systematically monitor for incidents involving their models.

Yet the AI ecosystem is more than a handful of corporate labs. It also includes countless developers, researchers, and creators who can freely access, refine, and share open models. They can iterate on powerful “base” models to create safer, less biased, or more reliable “fine-tuned” models that they release back to the community.

If governments treat these everyday developers the same as the companies that first released the model, there will be problems. Developers operating from dorm rooms and dining tables won’t be able to comply with the premarket licensing and approval requirements that have been proposed in Congress, or the “one size fits all” evaluation, mitigation, and documentation requirements initially drafted by the European Parliament. And they would never contribute to model development—or any other kind of software development—if they thought a senator might hold them liable for how downstream actors use or abuse their research. Individuals releasing new and improved models on GitHub shouldn’t face the same compliance burden as OpenAI or Meta.

The thresholds for restrictions seem arbitrary

Second, the criteria underpinning these thresholds are unclear. Before we put up barriers around the development and distribution of a useful technology, governments should assess the initial risk of the technology, the residual risk after considering all available legal and technical mitigations, and the opportunity cost of getting it wrong.

Yet there is still no framework for determining whether these models actually pose a serious and unmitigated risk of catastrophic misuse, or for measuring the impact of these rules on AI innovation. The preliminary U.S. threshold—10^26 floating point operations (FLOPs) in training computation—first appeared as a passing footnote in a research paper. The EU threshold of 10^25 FLOPs is an order of magnitude more conservative, and didn’t appear at all until the final month of negotiation. We may cross that threshold in the foreseeable future. What’s more, both governments reserve the right to move these goalposts for any reason, potentially bringing into scope a massive number of smaller but increasingly powerful models, many of which can be run locally on laptops or smartphones.
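The compute thresholds above can be sanity-checked with a back-of-the-envelope estimate. The sketch below uses the common approximation that training a dense transformer costs roughly 6 × parameters × training tokens in FLOPs; both the heuristic and the example model size are illustrative assumptions, not the methodology either regulator prescribes.

```python
# Rough estimate of training compute using the common heuristic
# FLOPs ≈ 6 × N (parameters) × D (training tokens).
# The 6ND rule and the example model below are illustrative assumptions,
# not the measurement method the EU or U.S. rules actually specify.

EU_THRESHOLD = 1e25   # EU AI Act "systemic risk" threshold, in FLOPs
US_THRESHOLD = 1e26   # U.S. executive order reporting threshold

def training_flops(params: float, tokens: float) -> float:
    """Approximate total training compute for a dense transformer."""
    return 6 * params * tokens

def classify(flops: float) -> str:
    """Report which regulatory compute thresholds an estimate crosses."""
    if flops >= US_THRESHOLD:
        return "over both EU and US thresholds"
    if flops >= EU_THRESHOLD:
        return "over EU threshold only"
    return "under both thresholds"

# A hypothetical 70-billion-parameter model trained on 2 trillion tokens:
flops = training_flops(70e9, 2e12)
print(f"{flops:.2e} FLOPs -> {classify(flops)}")
# prints: 8.40e+23 FLOPs -> under both thresholds
```

On this estimate, even a fairly large open model sits an order of magnitude or more below the EU line, which is why the thresholds currently capture only the most compute-intensive frontier runs; moving the goalposts downward would change that quickly.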

Restrictions are based on speculative risks

Third, there is no consensus about precisely which risks justify these exceptional controls. Online safety, election disinformation, smart malware, and fraud are some of the most immediate and tangible risks posed by generative AI. Economic disruption is possible too. However, these risks are rarely invoked to justify premarket controls for other helpful software technologies with dual-use applications. Photoshop, Word, Facebook, Google Search, and WhatsApp have contributed to the proliferation of deepfakes, fake news, and phishing scams, but our first instinct isn’t to regulate their underlying C++ or Java libraries.

Instead, critics have focused on “existential risk” to make the case for regulating model development and distribution, citing the prospect of runaway agents or homebuilt weapons of mass destruction. However, as a recent paper from Stanford’s Institute for Human-Centered Artificial Intelligence (HAI) notes of these claims, “the weakness of evidence is striking.” If these arguments are to justify a radical departure from our conventional approach to regulating technology, the standard of proof should be higher than speculation.

We should regulate AI while preserving openness

There is no debate that AI should be regulated, and all actors—from model developers to application deployers—have a role to play in mitigating emerging risks. However, new rules must account for grassroots innovation in open models. Right now, well-intended efforts to regulate models run the risk of stifling open development. Taken to their extreme, these frameworks may limit access to foundational technology, saddle hobbyists with corporate obligations, or formally restrict the exchange of ideas and resources between everyday developers.

In many ways, models are regulated already, thanks to a complex patchwork of legal frameworks that governs the development and deployment of any technology. Where there are gaps in existing law—such as U.S. federal law governing abusive, fraudulent, or political deepfakes—they can and should be closed.

However, presumptive restrictions on model development should be the option of last resort. We should regulate for emerging risks while preserving the culture of open development that made these breakthroughs possible in the first place, and that drives transparency and competition in AI.

Copyright for syndicated content belongs to the linked source: IEEE Spectrum – https://spectrum.ieee.org/open-source-ai-good

Tags: open-source, science