Generative artificial intelligence is headed for a reality check in the coming year, an analyst firm predicted Tuesday, citing fading excitement around the technology, rising operational costs, and mounting calls for regulation as signs that a deceleration is ahead.
In its annual assessment of future trends in the technology industry for 2024 and beyond, CCS Insight made numerous forecasts about the path ahead for AI, a technology that has attracted widespread attention for both its promise and its pitfalls.
CCS Insight’s headline prediction for 2024 is that generative AI will get a reality check, as the allure of the technology gives way to a sober awareness of its actual costs, risks, and complexity.
Ben Wood, the chief analyst at CCS Insight, emphasized that currently, generative AI is a hot topic, with major players like Google, Amazon, Qualcomm, and Meta heavily involved.
While CCS Insight is a strong supporter of AI and believes in its substantial potential economic and societal impact, the firm argues that the hype surrounding generative AI in 2023 has been excessive.
Challenges in Deploying Generative AI
According to Wood, this hype needs to be tempered because substantial hurdles remain before the technology can be brought to market successfully.
Generative AI systems such as OpenAI’s ChatGPT, Google Bard, Anthropic’s Claude, and Synthesia depend on huge amounts of computing power to run the complex mathematical models that allow them to work out responses to user prompts.

To run AI applications, companies must procure high-performance chips. In the case of generative AI, developers large and small often turn to advanced graphics processing units (GPUs) designed by the U.S. semiconductor leader Nvidia.
Today, a growing number of companies, including Amazon, Google, Alibaba, Microsoft, Meta, and reportedly OpenAI, are developing their own specialized AI chips to run these programs.
“The cost of deploying and maintaining generative AI is simply staggering,” Wood pointed out. “While it may be manageable for these massive corporations, many organizations and developers may find it prohibitively expensive.”
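To put some rough numbers behind that claim, here is a back-of-envelope sketch of daily GPU serving costs. Every figure in it — the GPU rate, throughput, response length, and traffic volume — is an illustrative assumption chosen for the arithmetic, not vendor pricing or measured data.

```python
# Back-of-envelope sketch of why generative AI serving costs add up.
# All figures below are illustrative assumptions, not real pricing data.

GPU_HOURLY_COST_USD = 2.50        # assumed cloud rate for one high-end GPU
TOKENS_PER_SECOND_PER_GPU = 60    # assumed sustained generation throughput
TOKENS_PER_RESPONSE = 500         # assumed average response length
QUERIES_PER_DAY = 1_000_000       # assumed daily traffic

def daily_serving_cost() -> float:
    """Estimate the daily GPU cost of serving the assumed query volume."""
    gpu_seconds = QUERIES_PER_DAY * TOKENS_PER_RESPONSE / TOKENS_PER_SECOND_PER_GPU
    gpu_hours = gpu_seconds / 3600
    return gpu_hours * GPU_HOURLY_COST_USD

if __name__ == "__main__":
    print(f"Estimated daily serving cost: ${daily_serving_cost():,.0f}")
```

Under these assumptions, the bill comes to roughly $5,800 a day — over $2 million a year — before training, staffing, or infrastructure costs are counted, which illustrates why smaller organizations may struggle to keep pace.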
EU AI Regulation Faces Obstacles
Challenges lie ahead for AI regulation in the European Union, even though the EU has typically led the way in technology-related legislation.
The EU is expected to be the first to enact specific AI regulations, but these regulations are likely to undergo multiple revisions and updates due to the rapid pace of AI advancement, as noted by CCS Insight’s analysts.
Ben Wood predicted that the legislation won’t be finalized until late 2024, which means the industry will need to take initial steps toward self-regulation in the meantime.
Generative AI has garnered significant attention this year from technology enthusiasts, venture capitalists, and corporate boardrooms due to its capacity to create new content in a human-like manner in response to text-based prompts.
This technology has been employed to generate a wide range of content, spanning from song lyrics mimicking Taylor Swift’s style to complete college essays.
While it clearly showcases the vast potential of AI, the technology has also raised apprehension among government authorities and the general public, with growing concern that it has advanced to the point where it could put people’s jobs at risk.
Many governments are pushing for regulation of AI. Within the European Union, for instance, lawmakers are working to pass the AI Act, a landmark piece of legislation that would take a risk-based approach to regulating AI, under which certain applications, such as live facial recognition, could face outright prohibition.
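The risk-based idea can be illustrated with a short sketch. The four tiers below follow the Act’s widely reported structure — unacceptable, high, limited, and minimal risk — but the example use cases and their assignments are an illustrative simplification, not legal guidance.

```python
# A minimal sketch of a risk-based approach to AI regulation: map example
# AI uses to risk tiers and the kind of obligation each tier implies.
# The example classifications are illustrative, not legal guidance.

from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "banned outright"
    HIGH = "strict conformity assessment before deployment"
    LIMITED = "transparency obligations (e.g., disclose AI use)"
    MINIMAL = "no additional obligations"

# Illustrative examples only.
EXAMPLE_CLASSIFICATION = {
    "live facial recognition in public spaces": RiskTier.UNACCEPTABLE,
    "CV-screening tool for hiring": RiskTier.HIGH,
    "customer-service chatbot": RiskTier.LIMITED,
    "spam filter": RiskTier.MINIMAL,
}

for use_case, tier in EXAMPLE_CLASSIFICATION.items():
    print(f"{use_case}: {tier.name} -> {tier.value}")
```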
Debates Over Regulation for LLM-Based Generative AI Tools
For large language model-based generative AI tools such as OpenAI’s ChatGPT, the proposals would require developers to submit these models for independent review before releasing them to the public. This requirement has sparked controversy within the AI community, with some viewing the plans as overly restrictive.
The companies responsible for several major foundational AI models have expressed their support for regulation, emphasizing the importance of subjecting the technology to scrutiny and establishing safeguards. However, their proposed approaches to regulating AI vary.
In June, OpenAI’s CEO, Sam Altman, advocated for the appointment of an independent government official to address the complexities of AI and oversee the licensing of the technology.
In contrast, Google, in comments submitted to the National Telecommunications and Information Administration, expressed a preference for a “multi-layered, multi-stakeholder approach to AI governance.”
AI-Generated Content Warnings
CCS Insight forecasts that a search engine will begin incorporating such warnings, notifying users when the material they are viewing was generated by AI rather than created by humans.
Every day, a multitude of AI-generated news articles are being published, often riddled with inaccuracies and misinformation. According to NewsGuard, a rating system for news and information websites, there are currently 49 news sites that exclusively feature content generated by AI software.
CCS Insight anticipates that such developments will prompt an internet search company to introduce labels, akin to “watermarking” in the industry, to denote content created by AI. This approach is reminiscent of how social media platforms introduced information labels on posts related to Covid-19 to counteract misinformation about the virus.
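A minimal sketch of how such labels might surface in search results follows. The `ai_generated` flag and the `render` helper are hypothetical, and reliably detecting AI-generated text remains an open problem in practice.

```python
# A minimal sketch of attaching AI-content warnings to search results.
# The ai_generated flag is hypothetical: in practice it might come from
# publisher disclosure or (unreliable) automated detection.

from dataclasses import dataclass

@dataclass
class SearchResult:
    url: str
    snippet: str
    ai_generated: bool  # assumed provenance signal, not a solved problem

def render(result: SearchResult) -> str:
    """Prepend a warning label to results flagged as AI-generated."""
    label = "[AI-generated content] " if result.ai_generated else ""
    return f"{label}{result.snippet} ({result.url})"

print(render(SearchResult("example.com/story", "Breaking news...", True)))
```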
AI Crime Doesn’t Pay
In the coming year, CCS Insight anticipates that law enforcement will make its first arrests of individuals engaged in AI-based identity fraud.
As early as 2024, the company predicts, police will apprehend the first person who uses AI to impersonate someone, whether through voice-synthesis technology or another form of “deepfake.”
CCS Insight explained in its predictions list, “Foundational models for image generation and voice synthesis can be tailored to mimic a specific individual using publicly available social media data, making it possible to produce convincing and cost-effective deepfakes.”
The potential consequences of such actions are far-reaching, encompassing harm to personal and professional relationships, as well as fraud in various sectors such as banking, insurance, and benefits.
Conclusion
In 2024, AI faces a reality check amid concerns about hype, costs, and regulation. Generative AI’s computational demands pose a challenge even as tech giants design their own chips. The EU is leading on AI regulation, though its rules will likely need repeated revision. Debates over how to oversee AI continue, while AI-content warnings and the first arrests for AI-enabled crimes emerge as trends to watch.