What 10 top AI stories in 2022 reveal about 2023

Image Credit: Generated with DALL-E



As we look back at VentureBeat’s top AI stories of the year, it’s clear that the industry’s advances — including, notably, in generative AI — are vast and powerful, but only the beginning of what is to come. 

For example, OpenAI, the artificial intelligence research lab behind AI tools that exploded this year, including DALL-E 2 and ChatGPT, debuted buzzed-about advancements that drew attention from the general public as well as the tech industry. DALL-E’s text-to-image generation and ChatGPT’s ability to produce high-quality, long-form content left creatives questioning whether they will soon be out of a job, and who owns the content these tools create in the first place.

Meanwhile, the next iteration of advancements may not be far off for OpenAI. This fall, Ray, the machine learning technology behind OpenAI’s large-scale operations, reached its next milestone: Ray 2.0. The update adds a runtime layer designed to simplify building and managing large AI workloads, which should allow companies like OpenAI to make even greater strides in 2023.

Though generative AI led much of this year’s trending coverage, it wasn’t the only area of AI making waves with ripple effects. Intel unveiled what it claims is the first real-time deepfake detector, which works by analyzing subtle “blood flow” in videos and returns results in milliseconds with 96% accuracy. It’s a tool that may become increasingly useful for maintaining integrity as generative AI video and image capabilities become even more realistic. 


And AI seemingly continued to “eat” the world as we know it, as everything from the most mundane technology use cases to the most complex algorithms was reworked with AI-powered improvements this year. Google released a beta version of Simple ML for its Google Sheets tool to extend the platform’s calculation and graphing capabilities, while DeepMind unveiled the first AI to discover faster matrix multiplication algorithms, a result some say could improve computing across the entire industry.

Alongside the strides made in AI this year, several companies, including Meta, are heading into 2023 with fewer AI employees after layoffs driven by the declining economy. As part of its 11,000 layoffs, the technology and social media giant cut an entire machine learning infrastructure team this fall, a surprise given that the company has said it plans to increase its focus on AI. 

While the future may be uncertain for some AI professionals in the short term, experts don’t anticipate that this will significantly impact AI’s progress in the long run. There have been arguments that AI has in some respects hit a wall, or slowed down to what one industry CEO referred to as a “Stone Age.” Others have fired back against claims like these, including renowned computer scientist and artificial neural networks pioneer Geoffrey Hinton, who told VentureBeat that the rapid progress we’re seeing in AI will continue to accelerate.

Looking ahead, Andrew Ng, founder of Landing AI and DeepLearning.AI, told VentureBeat that the next decade of progress in AI will revolve heavily around generative AI capabilities and a shift toward data-centric AI. 

“As we collectively make progress on this over the next few years, I think it will enable many more AI applications, and I’m very excited about that,” Ng said in a previous interview.

Progress is certain to continue, but not without bumps in the road. As legislation around regulating AI continues to unfold, it will be important for organizations to hire executives — perhaps a chief AI officer — who are knowledgeable about its benefits, consequences and constantly evolving capabilities. Until then, progress, not perfection, is what to expect for 2023.

Here’s more from our top 10 AI stories of 2022:

  1. Andrew Ng predicts the next 10 years in AI

    George Anadiotis wrote this March 21 story, an interview with Andrew Ng, founder of Landing AI and DeepLearning.AI, co-chairman and co-founder of Coursera and adjunct professor at Stanford University. Ng told VentureBeat that much of the focus on AI throughout the last decade has been on big data. In the decade to come, he predicts a shift toward data-centric AI.

    “Ten years ago, I underestimated the amount of work that would be needed to flesh out deep learning, and I think a lot of people today are underestimating the amount of work … that will be needed to flesh out data-centric AI to its full potential,” Ng said. “But as we collectively make progress on this over the next few years, I think it will enable many more AI applications, and I’m very excited about that.”


  2. Meta layoffs hit an entire ML research team focused on infrastructure

    Senior writer Sharon Goldman was up late at night scrolling through Twitter on November 9, the day Meta announced it was laying off 11,000 employees. In a public statement, Mark Zuckerberg had shared a message to Meta employees that signaled, to some, that those working in artificial intelligence (AI) and machine learning (ML) might be spared the brunt of the cuts.

    However, Thomas Ahle, a Meta research scientist who was laid off, tweeted that he and the entire research organization called Probability, which focused on applying machine learning across the infrastructure stack, were cut. The team had 50 members, not including managers, he said. 


  3. OpenAI debuts ChatGPT and GPT-3.5 series as GPT-4 rumors fly

    As GPT-4 rumors continued to fly at NeurIPS 2022 on November 30, OpenAI managed to take over the news with ChatGPT, a new model in its GPT-3.5 series of AI-powered large language models (LLMs) that reportedly improves on its predecessors by handling more complex instructions and producing higher-quality, longer-form content.

    ChatGPT has been out for only a few weeks, but hasn’t stopped making news since its release. 


  4. DeepMind unveils first AI to discover faster matrix multiplication algorithms

    It was considered one of the toughest mathematical puzzles to crack: Could AI create its own algorithms to speed up matrix multiplication, one of machine learning’s most fundamental tasks? In a paper published in Nature on October 5, research lab DeepMind unveiled AlphaTensor, the “first artificial intelligence system for discovering novel, efficient and provably correct algorithms.” The Google-owned lab said the research “sheds light” on a 50-year-old open question in mathematics about finding the fastest way to multiply two matrices.

    AlphaTensor, according to a DeepMind blog post, builds upon AlphaZero, an agent that has shown superhuman performance in board games like chess and Go. This new work takes the AlphaZero journey further, moving from playing games to tackling unsolved mathematical problems.
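
    AlphaTensor’s search space is essentially decompositions of the matrix multiplication tensor: ways to compute a product with fewer scalar multiplications than the schoolbook method. As a hand-written illustration of the kind of shortcut it hunts for (not AlphaTensor’s own output), here is a sketch in Python of Strassen’s classic 1969 trick, which multiplies two 2x2 matrices with seven multiplications instead of eight; applied recursively to matrix blocks, that one saved multiplication is what makes the algorithm asymptotically faster.

    ```python
    import numpy as np

    def strassen_2x2(A, B):
        """Multiply two 2x2 matrices with 7 multiplications instead of 8 (Strassen, 1969).

        AlphaTensor searches for decompositions of this kind automatically; this
        hand-written example only illustrates the sort of saving it looks for.
        """
        (a11, a12), (a21, a22) = A
        (b11, b12), (b21, b22) = B
        m1 = (a11 + a22) * (b11 + b22)
        m2 = (a21 + a22) * b11
        m3 = a11 * (b12 - b22)
        m4 = a22 * (b21 - b11)
        m5 = (a11 + a12) * b22
        m6 = (a21 - a11) * (b11 + b12)
        m7 = (a12 - a22) * (b21 + b22)
        return np.array([[m1 + m4 - m5 + m7, m3 + m5],
                         [m2 + m4,           m1 - m2 + m3 + m6]])

    A = np.random.rand(2, 2)
    B = np.random.rand(2, 2)
    assert np.allclose(strassen_2x2(A, B), A @ B)  # matches the naive product
    ```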


  5. Google brings machine learning to online spreadsheets with Simple ML for Sheets

    On December 7, Sean Michael Kerner shared the news that Google was planning to bring machine learning to its Sheets tool. While simple calculations and graphs have long been part of the spreadsheet experience, machine learning (ML) has not. ML is often seen as being too complex to use, while spreadsheets are intended to be accessible to any type of user.

    Google announced a beta release of the Simple ML for Sheets add-on. Google Sheets has an extensible architecture that enables users to benefit from add-ons that extend the application’s default functionality. In this case, Google Sheets gains ML technology that Google first developed in the open-source TensorFlow project. With Simple ML for Sheets, users do not need to work with TensorFlow directly, as Google has designed the add-on to be as accessible as possible.
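
    Simple ML runs entirely inside Sheets, but the core idea, training a model to predict one spreadsheet column from the others, is easy to sketch in code. The example below is an illustration rather than the add-on’s actual internals; it uses Google’s open-source TensorFlow Decision Forests library (one of the TensorFlow-family tools suited to exactly this kind of tabular data) on a small hypothetical table.

    ```python
    import pandas as pd
    import tensorflow_decision_forests as tfdf  # pip install tensorflow_decision_forests

    # Hypothetical spreadsheet: predict the "churned" column from the other columns.
    df = pd.DataFrame({
        "region":     ["north", "south", "north", "east", "south", "east"],
        "units_sold": [120, 80, 150, 60, 90, 200],
        "churned":    [0, 1, 0, 1, 1, 0],   # label column
    })

    train_ds = tfdf.keras.pd_dataframe_to_tf_dataset(df, label="churned")
    model = tfdf.keras.RandomForestModel()   # handles mixed text/numeric columns out of the box
    model.fit(train_ds)

    # Score the same rows, roughly the way Simple ML "predicts missing values" in a sheet.
    print(model.predict(train_ds))
    ```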


  6. 10 years later, deep learning ‘revolution’ rages on, say AI pioneers Hinton, LeCun and Li

    When senior writer Sharon Goldman realized that September 2022 was the 10-year anniversary of AlexNet, the key neural network research that set off the deep learning revolution in 2012, she reached out to AI pioneer Geoffrey Hinton. 

    Featuring interviews with Hinton and fellow AI luminaries Yann LeCun and Fei-Fei Li, the piece looks back at a booming AI decade and takes a deep dive into what’s ahead in AI.  


  7. Will OpenAI’s DALL-E 2 kill creative careers?

    OpenAI’s expanded beta access to DALL-E 2, its powerful image-generating AI solution, sent the tech world buzzing with excitement in late July, but also left many with questions.

    For one thing, what does the commercial use of DALL-E’s AI-powered imagery mean for creative industries and workers? Will it replace them?

    According to OpenAI, the answer is no. DALL-E is a tool that “enhances and extends the creative process,” an OpenAI spokesperson told VentureBeat. Much as an artist would look at different artworks for inspiration, DALL-E can help an artist come up with creative concepts.

    Since this article was published, debate and criticism have continued over the ownership of images generated by AI. That debate certainly won’t end anytime soon. 


  8. Intel unveils real-time deepfake detector, claims 96% accuracy rate

    On November 16, Intel introduced FakeCatcher, which it says is the first real-time detector of deepfakes: synthetic media in which a person in an existing image or video is replaced with someone else’s likeness. 

    Intel claims the product has a 96% accuracy rate and works by analyzing the subtle “blood flow” in video pixels to return results in milliseconds. 

    With deepfake threats looming, this type of deepfake detection technology is becoming ever more important. The question is, does it really work? 
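
    Intel has not published FakeCatcher’s full pipeline, but the “blood flow” signal it refers to is a form of remote photoplethysmography (rPPG): skin color shifts very slightly with each heartbeat, and the premise is that deepfakes tend not to reproduce that periodic signal faithfully. The toy function below is a sketch of that general idea under stated assumptions, not Intel’s method; it measures how much of a face crop’s green-channel variation falls in a plausible heart-rate band.

    ```python
    import numpy as np

    def pulse_band_ratio(frames: np.ndarray, fps: float = 30.0,
                         low_hz: float = 0.7, high_hz: float = 4.0) -> float:
        """Fraction of the green-channel signal's energy in the heart-rate band (~42-240 bpm).

        frames: face crop as an array of shape (num_frames, height, width, 3), RGB.
        A real face tends to show a periodic pulse here; this only illustrates the
        rPPG idea, not Intel's FakeCatcher algorithm.
        """
        green = frames[..., 1].mean(axis=(1, 2))        # average green value per frame
        green = green - green.mean()                    # drop the DC component
        power = np.abs(np.fft.rfft(green)) ** 2
        freqs = np.fft.rfftfreq(len(green), d=1.0 / fps)
        in_band = (freqs >= low_hz) & (freqs <= high_hz)
        return float(power[in_band].sum() / (power.sum() + 1e-9))

    # Hypothetical usage (the loader is a placeholder, not a real API):
    # frames = load_face_crop("interview_clip.mp4")
    # print("pulse-band ratio:", pulse_band_ratio(frames))
    ```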


  9. Who owns DALL-E images? Legal AI experts weigh in

    In another installment of what has become an ongoing text-to-image generator drama, senior writer Sharon Goldman explored the legal ramifications of tools like DALL-E 2. 

    When OpenAI announced expanded beta access to DALL-E in July, the company offered paid subscription users full usage rights to reprint, sell and merchandise the images they create with the powerful text-to-image generator.

    A week later, creative professionals across industries were already buzzing with questions. Topping the list: Who owns images put out by DALL-E, or for that matter, other AI-powered text-to-image generators, such as Google’s Imagen? The owner of the AI that trains the model? Or the human who prompts the AI?

    Bradford Newman, who leads the machine learning and AI practice at global law firm Baker McKenzie’s Palo Alto office, said the answer to the question “Who owns DALL-E images?” is far from clear. And, he emphasized, legal fallout is inevitable.


  10. Ray, the machine learning tech behind OpenAI, levels up to Ray 2.0

    Sean Michael Kerner wrote this August 23 piece about the infrastructure that supports OpenAI: Ray. Over the last two years, one of the most common ways for organizations to scale and run increasingly large and complex artificial intelligence workloads has been with the open-source Ray framework, used by companies from OpenAI to Shopify and Instacart. Ray enables machine learning (ML) models to scale across hardware resources, and can also be used to support MLOps workflows across different ML tools. The tool’s next major milestone debuted at the Ray Summit in San Francisco: Ray 2.0 extends the technology with the new Ray AI Runtime (AIR), which is intended to work as a runtime layer for executing ML services.
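
    For a feel of what Ray actually provides, the minimal sketch below fans a batch workload out across a local Ray cluster using the core task API. It is an illustrative toy with assumed inputs, not OpenAI’s stack and not the new AIR layer itself, but the same scale-out pattern is what Ray 2.0’s runtime builds on.

    ```python
    import ray

    ray.init()  # starts a local cluster here; in production you connect to an existing one

    @ray.remote
    def score_batch(batch):
        """Stand-in for real ML work (e.g., model inference on a batch of records)."""
        return sum(len(str(item)) for item in batch)

    batches = [list(range(i, i + 1_000)) for i in range(0, 10_000, 1_000)]
    futures = [score_batch.remote(b) for b in batches]  # tasks execute in parallel on workers
    print(sum(ray.get(futures)))                        # gather the results
    ray.shutdown()
    ```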



Ashleigh Hollowell