Artificial intelligence learned how to generate text and art better than ever before, while computer scientists developed algorithms that solved long-standing problems.
In 2023, artificial intelligence dominated popular culture, showing up in everything from internet memes to Senate hearings. Large language models such as those behind ChatGPT fueled much of this excitement, even as researchers still struggled to pry open the "black box" of their inner workings. Image generation systems also routinely impressed and unsettled us with their artistic abilities, yet these were explicitly founded on concepts borrowed from physics.
The year brought many other advances in computer science. Researchers made subtle but important progress on one of the oldest problems in the field, a question about the nature of hard problems referred to as “P versus NP.” In August, my colleague Ben Brubaker explored this seminal problem and the attempts of computational complexity theorists to answer the question: Why is it hard (in a precise, quantitative sense) to understand what makes hard problems hard? “It hasn’t been an easy journey — the path is littered with false turns and roadblocks, and it loops back on itself again and again,” Brubaker wrote. “Yet for meta-complexity researchers, that journey into an uncharted landscape is its own reward.”
The year was also full of discrete but still important advances. Shor's algorithm, the long-promised killer app of quantum computing, got its first significant upgrade after nearly 30 years. Researchers finally learned how to find the shortest route through a general type of network nearly as fast as theoretically possible. And cryptographers, forging an unexpected connection to AI, showed how machine learning models and machine-generated content must also contend with hidden vulnerabilities and messages.
Some problems, it seems, are still beyond our ability to solve — for now.
Hard Questions, Hard Answers
For 50 years, computer scientists have tried to solve the biggest open question in their field, known as “P versus NP.” It asks, roughly, how hard certain hard problems are. And for 50 years, their attempts have ended in failure. Many times, just as they began to make progress with a new approach, they hit a barrier proving that the tactic would never work. Eventually, they began to wonder why it’s so hard to prove that some problems are hard. Their efforts to answer such inward-looking questions have blossomed into a subfield, called meta-complexity, which has provided the greatest insights into the question yet.
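To see the asymmetry at the heart of P versus NP, consider Boolean satisfiability, a canonical NP problem: verifying a proposed solution takes just one quick pass over the formula, while no known algorithm finds a solution efficiently in general. Here is a minimal sketch of the easy half in Python (the formula and assignment are illustrative):

```python
# Each clause is a list of literals: a positive integer i means variable i,
# and -i means NOT variable i. A formula is satisfied when every clause
# contains at least one true literal.
def verify_sat(clauses, assignment):
    """Check a candidate assignment in time linear in the formula size."""
    return all(
        any(assignment[abs(lit)] == (lit > 0) for lit in clause)
        for clause in clauses
    )

# (x1 OR NOT x2) AND (x2 OR x3) AND (NOT x1 OR NOT x3)
formula = [[1, -2], [2, 3], [-1, -3]]
candidate = {1: True, 2: True, 3: False}
print(verify_sat(formula, candidate))  # True -- verifying is the easy half
```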
In an August article and a short documentary video, Quanta explained exactly what we know, how we know it and what we’re just starting to figure out when it comes to meta-complexity. At stake is not just the curiosity of the researchers involved: Resolving P versus NP could solve countless logistical problems, render all cryptography moot, and even speak to the ultimate nature of what’s knowable and what’s forever beyond our grasp.
The Powers of Large Language Models
Get enough stuff together, and you might be surprised by what can happen. Water molecules create waves, flocks of birds swoop and soar as one, and unconscious atoms combine into life. Scientists call these “emergent behaviors,” and they’ve recently seen the same thing happen with large language models — AI programs trained on enormous collections of text to produce humanlike writing. After they reach a certain size, these models can suddenly do unexpected things that smaller models can’t, such as solving certain math problems.
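At their core, these models are trained to predict what comes next in a stream of text. The toy sketch below captures that statistical objective in miniature with a simple bigram model; real large language models use transformer networks with billions of parameters, but the spirit is the same (the corpus here is illustrative):

```python
import random
from collections import Counter, defaultdict

# A toy next-word predictor trained on a tiny corpus. The training
# objective mirrors that of large language models in spirit: learn the
# distribution of what comes next, given what came before.
corpus = "the cat sat on the mat and the dog slept on the rug".split()

bigrams = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    bigrams[prev][nxt] += 1  # count how often `nxt` follows `prev`

def generate(start, length=6):
    words = [start]
    for _ in range(length):
        counts = bigrams[words[-1]]
        if not counts:            # dead end: no observed successor
            break
        nxt = random.choices(list(counts), weights=list(counts.values()))[0]
        words.append(nxt)
    return " ".join(words)

print(generate("the"))  # e.g. "the cat sat on the rug"
```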
Yet the surge of interest in large language models has raised new concerns. These programs invent falsehoods, perpetuate social biases, and fail to handle even some of the most elementary aspects of human language. Moreover, these programs remain a black box, their internal logic unknowable, though some researchers have ideas about how to change that.
Solving Negativity
Computer scientists have long known of algorithms that can whiz through graphs (networks of nodes connected by edges) where each connection carries some cost, like a toll road connecting two cities. But for decades, they couldn't find any fast algorithm for determining the shortest path when a road could carry either a cost or a reward, that is, a negative cost. Late last year, a trio of researchers delivered a workable algorithm that's nearly as fast as theoretically possible.
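The classical baseline for graphs that mix costs and rewards is the Bellman-Ford algorithm, which takes time proportional to the number of nodes times the number of edges; the new result approaches the much faster near-linear time long available for nonnegative weights. A sketch of that classical baseline (the edge list is illustrative):

```python
def bellman_ford(n, edges, source):
    """Shortest paths from `source` in a graph that may have negative-weight
    edges (but no negative cycles). Runs in O(n * m) time: the classical
    baseline that the new near-linear-time algorithm improves on.
    `edges` is a list of (u, v, weight) triples over nodes 0..n-1."""
    INF = float("inf")
    dist = [INF] * n
    dist[source] = 0
    for _ in range(n - 1):                 # n-1 rounds of relaxation
        for u, v, w in edges:
            if dist[u] + w < dist[v]:
                dist[v] = dist[u] + w
    for u, v, w in edges:                  # one extra pass detects negative cycles
        if dist[u] + w < dist[v]:
            raise ValueError("graph contains a negative-weight cycle")
    return dist

# The 1 -> 2 edge carries a "reward": its weight is -5.
edges = [(0, 1, 4), (1, 2, -5), (0, 2, 2), (2, 3, 3)]
print(bellman_ford(4, edges, 0))  # [0, 4, -1, 2]
```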
Then in March, researchers posted a new algorithm that can determine when two mathematical objects known as groups are, in a precise sense, the same; the work may lead to algorithms that can quickly compare groups (and perhaps other objects) more generally, a surprisingly difficult task. Other big algorithm news this year included a new way of computing prime numbers that combines random and deterministic approaches, the refutation of a long-standing conjecture about the performance of information-limited algorithms, and an analysis showing how a counterintuitive idea can improve the performance of gradient descent algorithms, which are ubiquitous in machine learning programs and other areas.
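Gradient descent itself is simple to state: repeatedly step downhill against the gradient of the function you want to minimize. A minimal sketch on a one-dimensional quadratic (the objective and step size are illustrative, and this sketch does not reproduce the counterintuitive large-step behavior the analysis studied):

```python
def gradient_descent(grad, x0, lr=0.1, steps=100):
    """Minimize a function by repeatedly stepping against its gradient."""
    x = x0
    for _ in range(steps):
        x = x - lr * grad(x)
    return x

# Minimize f(x) = (x - 3)^2, whose gradient is 2(x - 3).
minimum = gradient_descent(lambda x: 2 * (x - 3), x0=0.0)
print(round(minimum, 4))  # approximately 3.0
```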
Appreciating AI Art
Image-generating tools like DALL·E 2 exploded in popularity this year. Simply feed them a written prompt, and they’ll spit back a tableau of art depicting whatever you requested. But the work that made most of these artificial artists possible had been brewing for many years. Based on concepts from physics that describe spreading fluids, these so-called diffusion models effectively learn how to unscramble formless noise into a sharp image — as if turning back the clock on a cup of coffee to see the evenly distributed cream reconstitute into a well-defined dollop.
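The "scrambling" half of that process is easy to write down. In the standard formulation, a noising schedule mixes a clean image with Gaussian noise, step by step, and the model is trained to undo each step. The sketch below shows only the forward (noising) direction on a toy four-pixel "image"; the learned reverse process is the hard part and is omitted:

```python
import numpy as np

rng = np.random.default_rng(0)

def forward_diffuse(x0, t, num_steps=1000):
    """Sample x_t from the forward (noising) process of a diffusion model:
    x_t = sqrt(alpha_bar_t) * x0 + sqrt(1 - alpha_bar_t) * noise,
    using a simple linear variance schedule. Illustrative only."""
    betas = np.linspace(1e-4, 0.02, num_steps)   # noise added per step
    alpha_bar = np.cumprod(1.0 - betas)          # cumulative signal retained
    noise = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bar[t]) * x0 + np.sqrt(1.0 - alpha_bar[t]) * noise

x0 = np.array([1.0, -1.0, 0.5, 0.0])   # a toy "image" of four pixels
print(forward_diffuse(x0, t=0))        # nearly the original image
print(forward_diffuse(x0, t=999))      # nearly pure Gaussian noise
```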
AI tools have also been successful in improving the fidelity of existing images, though we're still far from the TV trope of a cop repeatedly shouting "Enhance!" More recently, researchers have turned to physical processes besides diffusion to explore new ways for machines to generate images. A newer approach governed by the Poisson equation, which describes how electric forces vary over distance, has in some cases proved better at handling errors and easier to train than diffusion models.
Improving the Quantum Standard
For decades, Shor’s algorithm has been the paragon of the power of quantum computers. Developed by Peter Shor in 1994, this set of instructions allows a machine that can exploit the quirks of quantum physics to break large numbers into their prime factors much faster than a regular, classical computer — potentially laying waste to much of the internet’s security systems. In August, a computer scientist developed an even faster variation of Shor’s algorithm, the first significant improvement since its invention. “I would have thought that any algorithm that worked with this basic outline would be doomed,” Shor said. “But I was wrong.”
Yet practical quantum computers are still beyond reach. In real life, tiny errors can quickly add up, ruining calculations and taking away any quantum benefits. In fact, late last year, a team of computer scientists showed that for a specific problem, a classical algorithm does roughly as well as a quantum one that includes errors. But there is hope: Work in August showed that certain error-correcting codes, known as low-density parity check codes, are at least 10 times more efficient than the current standard.
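Quantum error correction is beyond a short example, but the classical idea behind any parity-check code, which quantum low-density parity check codes build upon, is easy to show: sparse parity constraints over the bits produce a "syndrome" that flags and locates errors. A toy illustration with a three-bit repetition code:

```python
import numpy as np

# A sparse matrix H defines parity constraints; a nonzero "syndrome"
# (H @ word mod 2) reveals which constraints an error violated.
H = np.array([[1, 1, 0],      # check 0: bit 0 must equal bit 1
              [0, 1, 1]])     # check 1: bit 1 must equal bit 2

codeword = np.array([1, 1, 1])          # three-bit repetition code
received = codeword.copy()
received[1] ^= 1                        # flip the middle bit in transit

print(H @ codeword % 2)   # [0 0]: all checks pass
print(H @ received % 2)   # [1 1]: both checks touching bit 1 fire
```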
Hiding Secrets in AI
In an unusual finding at the intersection of cryptography and artificial intelligence, a team of computer scientists showed it was possible to insert into machine learning models certain backdoors that were practically invisible, their undetectability backed up by the same logic as the best modern encryption methods. The researchers focused on relatively simple models, so it’s unclear whether the same holds true for the more complicated models behind much of today’s AI tech. But the findings do suggest ways for future systems to guard against such security vulnerabilities, while also signaling a renewed interest in how the two fields can help each other grow.
These kinds of security issues are part of the reason Cynthia Rudin has championed using interpretable models to better understand what’s happening inside machine learning algorithms; researchers like Yael Tauman Kalai, meanwhile, have advanced our notions of security and privacy, even in the face of looming quantum technology. And a result in the related field of steganography showed how to hide a message with perfect security within machine-generated media.
Vector-Driven AI
As powerful as AI has become, the artificial neural networks that underpin most modern systems share two flaws: They require tremendous resources to train and operate, and it's too easy for them to become inscrutable black boxes. Many researchers argue that perhaps it's time for another approach. Instead of using artificial neurons that detect individual traits or characteristics, AI systems could represent concepts with endless variations of hyperdimensional vectors, arrays of thousands of numbers. Such a system is more versatile and more tolerant of errors, its computations can be far more efficient, and it lets researchers work directly with the ideas and relationships these models consider, giving them greater insight into a model's reasoning. Hyperdimensional computing is still in its infancy, but as it gets put to bigger tests, we may see the new approach start to take hold.
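The core operations are easy to sketch. In one common formulation, random high-dimensional vectors of +1s and -1s stand for atomic concepts; "binding" (elementwise multiplication) pairs a role with a value, "bundling" (addition) superimposes several pairs into a single vector, and unbinding plus a similarity search recovers what was stored. The names and dimension below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(42)
D = 10_000  # hyperdimensional: random vectors are nearly orthogonal

def hdv():
    """A random bipolar (+1/-1) hypervector."""
    return rng.choice([-1, 1], size=D)

def similarity(a, b):
    return a @ b / D   # near 1 for matching vectors, near 0 for random pairs

# Atomic concepts each get a random hypervector.
concepts = {name: hdv() for name in ["color", "shape", "red", "circle"]}

# Bind role to value (multiply), then bundle the pairs (add):
# one vector now represents "a red circle".
red_circle = (concepts["color"] * concepts["red"]
              + concepts["shape"] * concepts["circle"])

# Query: what is the color? Unbind, then find the most similar concept.
query = red_circle * concepts["color"]   # binding is its own inverse
best = max(concepts, key=lambda name: similarity(query, concepts[name]))
print(best)  # "red"
```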