Language AIs in 2024: Size, guardrails and steps toward AI agents

  • Written by John Licato, Associate Professor of Computer Science, Director of AMHR Lab, University of South Florida

I research[1] the intersection of artificial intelligence, natural language processing and human reasoning as the director of the Advancing Human and Machine Reasoning lab[2] at the University of South Florida. I am also commercializing this research in an AI startup[3] that provides a vulnerability scanner for language models.

From my vantage point, I observed significant developments in the field of AI language models in 2024, both in research and the industry.

Perhaps the most exciting of these are the capabilities of smaller language models, new tools for addressing AI hallucination, and frameworks for developing AI agents[4].

Small AIs make a splash

At the heart of commercially available generative AI products like ChatGPT are large language models, or LLMs, which are trained on vast amounts of text and produce convincing humanlike language. Their size is generally measured in parameters[5], which are the numerical values a model derives from its training data. The larger models like those from the major AI companies have hundreds of billions of parameters.
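To make that measure concrete, here is a minimal sketch, assuming the PyTorch library, of how a parameter count is tallied; the toy network is purely illustrative, while real LLMs stack far larger layers on the same principle:

    import torch.nn as nn

    # A toy two-layer network; production LLMs stack thousands of
    # much wider layers built from the same ingredients.
    model = nn.Sequential(
        nn.Linear(512, 1024),
        nn.ReLU(),
        nn.Linear(1024, 512),
    )

    # Every weight and bias is one parameter; summing their element
    # counts gives the "size" quoted for a model.
    num_params = sum(p.numel() for p in model.parameters())
    print(f"{num_params:,} parameters")  # about 1.05 million here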

There is an iterative interaction between large language models and smaller language models[6], which seems to have accelerated in 2024.

First, organizations with the most computational resources experiment with and train increasingly larger and more powerful language models. Those yield new large language model capabilities, benchmarks, training sets and training or prompting tricks. In turn, those are used to make smaller language models – in the range of 3 billion parameters or less – which can be run on more affordable computer setups, require less energy and memory to train, and can be fine-tuned with less data.

No surprise, then, that developers have released a host of powerful smaller language models – although the definition of small keeps changing: Phi-3[7] and Phi-4[8] from Microsoft, Llama-3.2 1B and 3B[9], and Qwen2-VL-2B[10] are just a few examples.

These smaller language models can be specialized for more specific tasks, such as rapidly summarizing a set of comments or fact-checking text against a specific reference. They can work with their larger cousins[11] to produce increasingly powerful hybrid systems.
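As a rough sketch of how such a specialist might be run, the snippet below uses the Hugging Face transformers library with the 1-billion-parameter Llama release cited above; that model is gated behind an access request, and any similarly sized open model would work the same way:

    from transformers import pipeline

    # Load a ~1B-parameter instruction-tuned model; small enough to
    # run on a single consumer GPU or even a laptop CPU.
    generator = pipeline(
        "text-generation",
        model="meta-llama/Llama-3.2-1B-Instruct",
    )

    comments = [
        "Shipping was fast but the box was damaged.",
        "Great value for the price, would buy again.",
        "Customer support never answered my email.",
    ]

    # Chat-style prompt asking the small model to act as a summarizer.
    messages = [{
        "role": "user",
        "content": "Summarize these customer comments in two sentences:\n"
                   + "\n".join(comments),
    }]
    result = generator(messages, max_new_tokens=80)
    print(result[0]["generated_text"][-1]["content"])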


Wider access

Increased access to highly capable language models, large and small, can be a mixed blessing.

Language models give malicious users the ability to generate social media posts at scale and deceptively influence public opinion. With consequential elections taking place around the world, there was a great deal of concern[12] about this threat in 2024.

And indeed, a robocall faking President Joe Biden’s voice asked New Hampshire Democratic primary voters to stay home[13]. OpenAI had to disrupt over 20 operations and deceptive networks[14] that tried to use its models for such campaigns. Fake videos and memes were created and shared[15] with the help of AI tools.

Despite the anxiety surrounding AI disinformation[16], it is not yet clear what effect these efforts actually had[17] on public opinion and the U.S. election. Nevertheless, U.S. states passed a large amount of legislation in 2024[18] governing the use of AI in elections and campaigns.

Misbehaving bots

Google started including AI overviews[19] in its search results, yielding some results that were hilariously and obviously wrong – unless you enjoy glue in your pizza[20]. However, other results may have been dangerously wrong, such as when it suggested mixing bleach and vinegar[21] to clean your clothes.

Large language models, as they are most commonly implemented, are prone to hallucinations[22]. This means that they can state things that are false or misleading, often with confident language. Even though I and others[23] continually beat the drum about this, 2024 still saw many organizations learning about the dangers of AI hallucination the hard way.

Despite significant testing, a chatbot playing the role of a Catholic priest advocated for baptism via Gatorade[24]. A chatbot advising on New York City laws and regulations[25] incorrectly said it was “legal for an employer to fire a worker who complains about sexual harassment, doesn’t disclose a pregnancy or refuses to cut their dreadlocks.” And OpenAI’s speech-capable model forgot whose turn it was to speak and responded to a human in her own voice[26].

Fortunately, 2024 also saw new ways to mitigate and live with AI hallucinations. Companies and researchers are developing tools for making sure AI systems follow given rules pre-deployment[27], as well as environments to evaluate them[28]. So-called guardrail frameworks[29] inspect large language model inputs and outputs in real time, albeit often by using another layer of large language models.
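In spirit, such a guardrail can be as simple as the hypothetical sketch below, in which a second model screens every input and output before letting it through; the function names and the prompt are assumptions for illustration, not any particular framework's API:

    def check_policy(text: str, judge) -> bool:
        # Ask a second model (the "judge", often itself a language
        # model) whether the text violates a given rule set.
        verdict = judge(
            "Does the following text violate the policy? "
            "Answer SAFE or UNSAFE.\n\n" + text
        )
        # "UNSAFE" contains "SAFE", so test for it explicitly.
        return "UNSAFE" not in verdict.upper()

    def guarded_chat(user_input: str, main_model, judge) -> str:
        # Inspect the input before it reaches the main model...
        if not check_policy(user_input, judge):
            return "Sorry, I can't help with that request."
        reply = main_model(user_input)
        # ...and inspect the output before it reaches the user.
        if not check_policy(reply, judge):
            return "Sorry, I couldn't produce a reliable answer."
        return reply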

And the conversation on AI regulation accelerated[30], causing the big players in the large language model space to update their policies on responsibly scaling[31] and harnessing AI[32].

But although researchers are continually finding ways to reduce hallucinations[33], in 2024, research convincingly showed[34] that AI hallucinations are always going to exist in some form[35]. It may be a fundamental feature of what happens when an entity has finite computational and information resources. After all, even human beings are known to confidently misremember and state falsehoods[36] from time to time.

The rise of agents

Large language models, particularly those powered by variants of the transformer architecture[37], are still driving the most significant advances in AI. For example, developers are using large language models to not only create chatbots, but to serve as the basis of AI agents. The term “agentic AI” shot to prominence in 2024[38], with some pundits even calling it the third wave[39] of AI.

To understand what an AI agent[40] is, think of a chatbot expanded in two ways: First, give it access to tools that provide the ability to take actions[41]. This might be the ability to query an external search engine, book a flight or use a calculator. Second, give it increased autonomy, or the ability to make more decisions on its own.

For example, a travel AI chatbot might be able to perform a search of flights based on what information you give it, but a tool-equipped travel agent might plan out an entire trip itinerary, including finding events, booking reservations and adding them to your calendar.
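A minimal sketch of that loop, not tied to any particular framework, might look like the following; the two tools and the llm callable are assumptions for illustration:

    import json

    # Illustrative stand-ins for real search, booking and calendar APIs.
    def search_flights(origin: str, dest: str) -> str:
        return f"Found 3 flights from {origin} to {dest}."

    def add_to_calendar(event: str) -> str:
        return f"Added '{event}' to your calendar."

    TOOLS = {"search_flights": search_flights,
             "add_to_calendar": add_to_calendar}

    def run_agent(llm, goal: str, max_steps: int = 5) -> str:
        # At each step the model either calls a tool (expressed as
        # JSON) or returns a final answer; tool results are fed back
        # so it can plan the next step from what it just learned.
        history = f"Goal: {goal}\n"
        for _ in range(max_steps):
            step = json.loads(llm(history))  # e.g. {"action": ..., "args": ...}
            if step["action"] == "final_answer":
                return step["content"]
            result = TOOLS[step["action"]](**step["args"])
            history += f"Tool {step['action']} returned: {result}\n"
        return "Stopped after reaching the step limit."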

AI agents can perform multiple steps of a task on their own.

In 2024, new frameworks for developing AI agents emerged. Just to name a few, LangGraph[42], CrewAI[43], PhiData[44] and AutoGen/Magentic-One[45] were released or improved in 2024.

Companies are just beginning to adopt[46] AI agents: the frameworks for developing them are new and rapidly evolving, and security, privacy and hallucination risks remain a concern.

But global market analysts forecast this to change[47]: 82% of organizations surveyed plan to use agents within 1-3 years[48], and 25% of all companies currently using generative AI[49] are likely to adopt AI agents in 2025.

References

  1. ^ I research (scholar.google.com)
  2. ^ Advancing Human and Machine Reasoning lab (github.com)
  3. ^ AI startup (www.actualization.ai)
  4. ^ AI agents (theconversation.com)
  5. ^ parameters (www.thecloudgirl.dev)
  6. ^ large language models and smaller language models (www.youtube.com)
  7. ^ Phi-3 (news.microsoft.com)
  8. ^ Phi-4 (techcommunity.microsoft.com)
  9. ^ Llama-3.2 1B and 3B (huggingface.co)
  10. ^ Qwen2-VL-2B (huggingface.co)
  11. ^ work with their larger cousins (aclanthology.org)
  12. ^ great deal of concern (campaignlegal.org)
  13. ^ to stay home (www.nbcnews.com)
  14. ^ disrupt over 20 operations and deceptive networks (openai.com)
  15. ^ created and shared (www.npr.org)
  16. ^ anxiety surrounding AI disinformation (washingtonstatestandard.com)
  17. ^ not yet clear what effect these efforts actually had (time.com)
  18. ^ legislation in 2024 (www.ncsl.org)
  19. ^ AI overviews (blog.google)
  20. ^ glue in your pizza (www.forbes.com)
  21. ^ mixing bleach and vinegar (www.salon.com)
  22. ^ prone to hallucinations (doi.org)
  23. ^ others (www.youtube.com)
  24. ^ advocated for baptism via Gatorade (www.businessinsider.com)
  25. ^ advising on New York City laws and regulations (apnews.com)
  26. ^ responded to a human in her own voice (arstechnica.com)
  27. ^ follow given rules pre-deployment (doi.org)
  28. ^ environments to evaluate them (doi.org)
  29. ^ guardrail frameworks (techcrunch.com)
  30. ^ on AI regulation accelerated (www.ncsl.org)
  31. ^ responsibly scaling (www.anthropic.com)
  32. ^ harnessing AI (openai.com)
  33. ^ ways to reduce hallucinations (doi.org)
  34. ^ convincingly showed (doi.org)
  35. ^ hallucinations are always going to exist in some form (doi.org)
  36. ^ confidently misremember and state falsehoods (health.clevelandclinic.org)
  37. ^ transformer architecture (dl.acm.org)
  38. ^ shot to prominence in 2024 (trends.google.com)
  39. ^ third wave (www.forbes.com)
  40. ^ AI agent (theconversation.com)
  41. ^ ability to take actions (python.langchain.com)
  42. ^ LangGraph (www.langchain.com)
  43. ^ CrewAI (www.crewai.com)
  44. ^ PhiData (www.phidata.com)
  45. ^ AutoGen/Magentic-One (www.microsoft.com)
  46. ^ beginning to adopt (www.forbes.com)
  47. ^ forecast this to change (www.analyticsvidhya.com)
  48. ^ plan to use agents within 1-3 years (www.capgemini.com)
  49. ^ 25% of all companies currently using generative AI (www2.deloitte.com)

Read more https://theconversation.com/language-ais-in-2024-size-guardrails-and-steps-toward-ai-agents-245646
