{"id":1679,"date":"2023-11-10T08:32:55","date_gmt":"2023-11-10T08:32:55","guid":{"rendered":"https:\/\/geneea.com\/news\/?p=1679"},"modified":"2026-01-27T21:36:24","modified_gmt":"2026-01-27T21:36:24","slug":"geneeas-ai-spotlight-6","status":"publish","type":"post","link":"https:\/\/geneea.com\/news\/geneeas-ai-spotlight-6","title":{"rendered":"Geneea&#8217;s AI Spotlight #6"},"content":{"rendered":"\n<p id=\"ember2515\">The sixth edition of our newsletter on Large Language Models is here.&nbsp;<\/p>\n\n\n\n<p id=\"ember2516\">Today, we take a look at&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>developments in AI infrastructure<\/li>\n\n\n\n<li>new models and prompting methods<\/li>\n\n\n\n<li>multimodal models,&nbsp;<\/li>\n\n\n\n<li>newsroom innovations, and<\/li>\n\n\n\n<li>ethical challenges.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"ember2518\">State of AI Report&nbsp;<\/h2>\n\n\n\n<p id=\"ember2519\">The highlight of the last month is undeniably the&nbsp;<a href=\"https:\/\/www.stateof.ai\/\"><strong>State of AI Report 2023<\/strong><\/a><strong>,<\/strong>&nbsp;with numerous&nbsp;<a href=\"https:\/\/nathanbenaich.substack.com\/p\/welcome-to-the-state-of-ai-report\">written<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/www.youtube.com\/watch?v=RCRuiu-3VDU\">recorded<\/a>&nbsp;summaries available.<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>The report is split into several sections covering research, industry, politics, safety, and predictions.<\/li>\n\n\n\n<li>From a research point of view, we are looking forward to&nbsp;<strong>smaller<\/strong>&nbsp;and more capable models as well as&nbsp;<strong>multimodal models<\/strong>&nbsp;(see more below).&nbsp;<\/li>\n\n\n\n<li>The Industry section explores AI&nbsp;<strong>chips sparsity<\/strong>&nbsp;(<a 
href=\"https:\/\/www.linkedin.com\/pulse\/geneeas-ai-spotlight-5-geneea?trackingId=oVM0VBtNfoWzr25giTvmjA%3D%3D&amp;lipi=urn%3Ali%3Apage%3Ad_UNKNOWN_ROUTE_organization-admin.admin.index%3BdVpo9%2B9ARLWFUjyn3pmMaQ%3D%3D\">AI Spotlight #5<\/a>), which also overflows into politics.&nbsp;<\/li>\n\n\n\n<li><strong>Policies are slow<\/strong>&nbsp;to follow the trends, while many models are easy to jailbreak.<\/li>\n\n\n\n<li>If at least some of the report\u2019s predictions come true, we have a lot to look forward to in 2024.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"ember2521\">Developments in AI Infrastructure<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Recently, we have seen several providers introducing&nbsp;<strong>serverless<\/strong>&nbsp;LLM solutions. When compared to the self-hosted alternative, this is significantly&nbsp;<strong>easier and cheaper to set up and use<\/strong>. Some of the offerings also support advanced&nbsp;<strong>customizations<\/strong>.&nbsp;<a href=\"https:\/\/blog.cloudflare.com\/workers-ai\/\">Cloudflare<\/a>, one of the providers, has also partnered with&nbsp;<a href=\"https:\/\/www.linkedin.com\/company\/huggingface\/\">Hugging Face<\/a>.<\/li>\n\n\n\n<li><a href=\"https:\/\/cloud.google.com\/blog\/products\/ai-machine-learning\/protecting-customers-with-generative-ai-indemnification\">Google<\/a>&nbsp;extends&nbsp;<strong>legal protection&nbsp;<\/strong>for users of their AI models, following&nbsp;<a href=\"https:\/\/blogs.microsoft.com\/on-the-issues\/2023\/09\/07\/copilot-copyright-commitment-ai-legal-concerns\/\">Microsoft&#8217;s<\/a>&nbsp;initiative.<\/li>\n\n\n\n<li><a href=\"https:\/\/blog.langchain.dev\/langserve-hub\/\">LangChain introduced Templates<\/a>, simplifying project creation by providing a range of&nbsp;<strong>end-to-end template architectures<\/strong>&nbsp;for various applications. 
Although primarily RAG-focused, the templates also cover other useful aspects like guardrails, step-back prompting, and interaction with Elasticsearch.<\/li>\n\n\n\n<li>Microsoft researchers presented&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2308.08155.pdf\"><strong>AutoGen<\/strong><\/a>, an open-source&nbsp;<strong>framework for agents<\/strong>. It offers an easy setup of specific agents and diverse&nbsp;<strong>conversational patterns&nbsp;<\/strong>among the agents. These include dynamic group chats of multiple agents moderated by an administrator agent, and an interesting three-agent setup featuring a guardian agent responsible for validating actions, safeguarding outputs, or offering operational insights.<\/li>\n\n\n\n<li><a href=\"https:\/\/www.linkedin.com\/in\/tereza-tizkova-568439174?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAACla4uABDb-a11ofbbnDXh39sRuifa3bY70\">Tereza Tizkova<\/a>&nbsp;compiled a&nbsp;<a href=\"https:\/\/medium.com\/e-two-b\/ai-agents-landscape-6ea03939296e\">comprehensive list<\/a>&nbsp;of the many agents already in existence.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"ember2523\">News from OpenAI<\/h2>\n\n\n\n<p id=\"ember2524\">There was a flurry of\u00a0<a href=\"https:\/\/openai.com\/blog\/new-models-and-developer-products-announced-at-devday\">announcements<\/a>\u00a0at\u00a0<a href=\"https:\/\/devday.openai.com\/\">OpenAI DevDay<\/a>:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>GPT-4 Turbo<\/strong>&nbsp;is an improved version of GPT-4, with support for a 128K context window, better function-calling accuracy, and improved control over the output format. It is also cheaper, and its training data covers events up to April 2023. GPT-3.5 Turbo received some of these improvements too.<\/li>\n\n\n\n<li><strong>Assistants API<\/strong>&nbsp;for building&nbsp;<strong>agents<\/strong>. 
A nice feature is the support for persistent and infinitely long threads: developers can offload&nbsp;<strong>conversation history management<\/strong>&nbsp;to OpenAI, saving space in the context window. The framework also supports three&nbsp;<strong>tools<\/strong>: a sandboxed Python interpreter, retrieval capabilities, and function calling. You can try all this in the&nbsp;<a href=\"https:\/\/platform.openai.com\/playground?mode=assistant\">playground<\/a>.&nbsp;<\/li>\n\n\n\n<li>Enhanced support for non-text modalities: GPT-4 Turbo accepts&nbsp;<strong>image input<\/strong>, DALL\u00b7E 3 offers an API for&nbsp;<strong>image generation<\/strong>, and there is a new&nbsp;<strong>text-to-speech<\/strong>&nbsp;model.<\/li>\n\n\n\n<li>GPT-4 now supports&nbsp;<strong>fine-tuning<\/strong>, although OpenAI warns that it is more challenging than with GPT-3.5.<\/li>\n\n\n\n<li>Most of the improved models are also&nbsp;<strong>cheaper<\/strong>&nbsp;and have&nbsp;<strong>higher rate limits<\/strong>.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"ember2526\">Evolving Landscape of Language Models<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>An exceptional&nbsp;<strong>opportunity for Europe<\/strong>&nbsp;in the AI field emerges with the development of the French&nbsp;<a href=\"https:\/\/mistral.ai\/news\/announcing-mistral-7b\/\"><strong>Mistral<\/strong><\/a><strong>&nbsp;model<\/strong>, created by&nbsp;<a href=\"https:\/\/medium.com\/@ignacio.de.gregorio.noblejas\/ai-company-valued-at-260-million-its-only-four-weeks-old-80b7508fcadd\">eminent ex-researchers<\/a>&nbsp;from Meta and DeepMind. This&nbsp;<strong>compact<\/strong>&nbsp;model, with only 7 billion parameters, offers low inference costs and demonstrates&nbsp;<strong>exceptional performance<\/strong>&nbsp;by outperforming LLaMA 13B in all benchmarks and LLaMA 34B in several reasoning benchmarks. 
Following Mistral&#8217;s release, several other models based on its architecture quickly emerged, such as&nbsp;<a href=\"https:\/\/huggingface.co\/ehartford\/dolphin-2.1-mistral-7b\">Mistral fine-tuned on the Dolphin<\/a>&nbsp;dataset or&nbsp;<a href=\"https:\/\/huggingface.co\/HuggingFaceH4\/zephyr-7b-alpha\">Zephyr-7B-\u03b1<\/a>.<\/li>\n\n\n\n<li><a href=\"https:\/\/www.linkedin.com\/company\/seznam.cz\/\">Seznam.cz<\/a>, a Czech company, released&nbsp;<a href=\"https:\/\/github.com\/seznam\/czech-semantic-embedding-models\">embedding models<\/a>&nbsp;with a focus on small size and support for the Czech language.&nbsp;<\/li>\n\n\n\n<li><a href=\"https:\/\/www.linkedin.com\/company\/anthropicresearch\/\">Anthropic<\/a>&nbsp;continues its effort to develop&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2212.08073\">Constitutional AI<\/a>, a system designed to be proactively helpful, harmless, and honest. The system<strong>&nbsp;learns from general principles<\/strong>&nbsp;instead of relying on humans to correct it in individual cases via RLHF. Recently, they&nbsp;<a href=\"https:\/\/www.anthropic.com\/index\/collective-constitutional-ai-aligning-a-language-model-with-public-input\">surveyed 1000 Americans<\/a>&nbsp;about the rules they wished AI to follow,&nbsp;<strong>comparing<\/strong>&nbsp;these to their in-house constitution for the Claude model. The constitution aligns closely with Asimov&#8217;s first law of robotics while omitting the third law, which concerns a robot&#8217;s self-preservation. 
You can review the differences between&nbsp;<strong>Claude&#8217;s constitution and the public rules<\/strong>&nbsp;<a href=\"https:\/\/efficient-manatee.files.svdcdn.com\/production\/images\/CCAI_public_comparison_2023-1.pdf\">here<\/a>.<\/li>\n\n\n\n<li>Additionally, they crafted an insightful&nbsp;<a href=\"https:\/\/twitter.com\/AnthropicAI\/status\/1709986951200116737\">thread<\/a>&nbsp;and&nbsp;<a href=\"https:\/\/transformer-circuits.pub\/2023\/monosemantic-features\/index.html\">article<\/a>&nbsp;on&nbsp;<strong>interpreting neurons&#8217; meanings<\/strong>&nbsp;within LLMs. These neurons usually represent a superposition of meanings. With dictionary learning, they demonstrated the extraction of specific meanings from a cluster of neurons, successfully deriving approximately 4,000 distinct features from roughly 500 neurons.<\/li>\n\n\n\n<li>Stanford researchers introduced the&nbsp;<a href=\"https:\/\/hai.stanford.edu\/news\/introducing-foundation-model-transparency-index\">FMTI index<\/a>, examining foundational&nbsp;<strong>models&#8217; transparency<\/strong>. Their assessment considers various indicators, including training data, architecture, abilities, and governing policies. The open models take the lead, with ChatGPT securing a commendable third place.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"ember2528\">New prompting methods<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A novel&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2310.06117.pdf\"><strong>\u201cstep-back\u201d prompting<\/strong><\/a>&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2310.06117.pdf\">technique<\/a>&nbsp;developed at DeepMind demonstrates improved performance over Chain-of-Thought in several reasoning and knowledge-intensive benchmarks and also enhances RAG results. 
The key lies in formulating a more&nbsp;<strong>general question<\/strong>&nbsp;from the original query and using the broader answer&nbsp;<strong>to provide context<\/strong>&nbsp;for the initial query.<\/li>\n\n\n\n<li>In collaboration with Stanford, DeepMind also introduced&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2310.01714.pdf\"><strong>Analogical prompting<\/strong><\/a>, which instructs the&nbsp;<strong>LLM to create<\/strong>&nbsp;related&nbsp;<strong>few-shot exemplars<\/strong>&nbsp;independently (&#8220;Recall three distinct and pertinent problems.&#8221;) or produce a tutorial for the query&#8217;s fundamental concepts to aid in solving it. This approach generates more relevant exemplars while reducing the effort needed to create them.<\/li>\n\n\n\n<li>DeepMind also furthered their exploration of&nbsp;<strong>optimizing prompts<\/strong>&nbsp;with LLMs (see&nbsp;<a href=\"https:\/\/www.linkedin.com\/pulse\/geneeas-ai-spotlight-5-geneea?trackingId=oVM0VBtNfoWzr25giTvmjA%3D%3D&amp;lipi=urn%3Ali%3Apage%3Ad_UNKNOWN_ROUTE_organization-admin.admin.index%3BdVpo9%2B9ARLWFUjyn3pmMaQ%3D%3D&amp;\">issue #5<\/a>) in a manner resembling genetic algorithms. Using the&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2309.16797.pdf\">Promptbreeder<\/a>&nbsp;system, they initiated a population of prompts with various thinking styles and problem descriptions, along with mutation prompts. These&nbsp;<strong>mutations<\/strong>&nbsp;guide the LLM&nbsp;<strong>to modify&nbsp;<\/strong>the initial&nbsp;<strong>instructions<\/strong>&nbsp;(e.g., &#8220;make it more fun&#8221;), while the mutation prompts themselves are also evolved to enhance the improvement process.<\/li>\n\n\n\n<li>Robotics researchers utilized&nbsp;<strong>coding LLMs<\/strong>&nbsp;to&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2310.12931.pdf\">devise reward functions<\/a>&nbsp;for&nbsp;<strong>reinforcement learning<\/strong>&nbsp;in manipulation tasks, employing a genetic mutation operator. 
Their&nbsp;<a href=\"https:\/\/eureka-research.github.io\/\">Eureka<\/a>&nbsp;algorithm streamlines human text input to&nbsp;<strong>enhance reward<\/strong>&nbsp;generation and highlights the collaborative potential among varied AI models.<\/li>\n\n\n\n<li>Meta AI devised a&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2309.11495.pdf\"><strong>Chain-of-Verification<\/strong><\/a>&nbsp;method&nbsp;<strong>to mitigate hallucinations<\/strong>&nbsp;by generating concise questions from initial responses and independently answering them. Simpler verification questions were answered more accurately than the initial queries, and the method outperformed yes\/no questions and non-task-specific heuristics. The revised answers reduced hallucinations by up to 38%, proving particularly effective for list-creation tasks.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"ember2530\">Multimodality<\/h2>\n\n\n\n<p id=\"ember2531\">Recently, there has been a growing focus on multimodal models, notably marked by the introduction of OpenAI&#8217;s new&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2309.17421.pdf\"><strong>GPT-4V<\/strong><\/a><strong>&nbsp;(vision)<\/strong>&nbsp;model, explored in depth by Microsoft researchers. Additionally,&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2310.03744.pdf\"><strong>LLaVA<\/strong><\/a>&nbsp;underwent an&nbsp;<strong>instruction tuning<\/strong>&nbsp;enhancement to version 1.5. 
The OpenAI announcements mentioned above also contain some incremental improvements in the area of image and speech.<\/p>\n\n\n\n<p id=\"ember2532\">From a broader viewpoint,&nbsp;<a href=\"https:\/\/www.linkedin.com\/in\/chiphuyen?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAAIQAJQBE3ykLNnsOPVvxwuuVCOir2zAjOQ\">Chip Huyen<\/a>&nbsp;extensively covered large multimodal models:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Categorized and elaborated on&nbsp;<strong>multimodal tasks<\/strong>, and emphasized models&#8217;&nbsp;<strong>performance enhancement<\/strong>&nbsp;by incorporating additional modalities.<\/li>\n\n\n\n<li>Explained the&nbsp;<strong>principles behind<\/strong>&nbsp;two prominent models \u2013&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2103.00020\"><strong>CLIP<\/strong><\/a>&nbsp;(using natural language supervision and contrastive learning)<strong>&nbsp;and<\/strong>&nbsp;<a href=\"https:\/\/arxiv.org\/abs\/2204.14198\"><strong>Flamingo<\/strong><\/a>&nbsp;(incorporating a vision encoder like CLIP and a language model to discuss the image).<\/li>\n\n\n\n<li>Offered&nbsp;<strong>valuable<\/strong>&nbsp;paper&nbsp;<strong>references<\/strong>&nbsp;concerning interesting&nbsp;<strong>research areas<\/strong>, like unifying multiple modalities into a single vector space, instruction-following for multimodal models, more efficient training, and generating multimodal outputs (such as GPT-4V&#8217;s ability to create tables in Latex).<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"ember2534\">Newsrooms&#8217; Innovations and Exploration<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li><a href=\"https:\/\/www.opensocietyfoundations.org\/\">Open Society Foundations<\/a>&nbsp;pioneers an&nbsp;<strong>AI for Journalism Challenge<\/strong>, engaging 12 newsrooms in exploring AI applications, as reported by&nbsp;<a href=\"https:\/\/generative-ai-newsroom.com\/rising-to-the-challenge-applying-generative-ai-in-newsrooms-283d5bb3de53\">David 
Caswell<\/a>. We are looking forward to the projects&#8217; outcomes, which include, for example, the identification of emerging stories, the use of generative AI to broaden the reach to younger audiences, and the assessment of the impact of news on specific societies.<\/li>\n\n\n\n<li>South Africa\u2019s Daily Maverick plans to make their one-paragraph AI-generated&nbsp;<a href=\"https:\/\/www.dailymaverick.co.za\/summaries\/\"><strong>article synopsis<\/strong><\/a>&nbsp;the default option due to positive readers&#8217; feedback.&nbsp;<a href=\"https:\/\/wan-ifra.org\/2023\/09\/ai-use-cases-how-genai-summaries-are-boosting-daily-mavericks-readership\/\">The CEO said<\/a>&nbsp;that most readers only read 25% of an article, but when this group is offered a synopsis, they tend to delve into at least three more articles during their site visit.<\/li>\n\n\n\n<li>Reuters Institute&#8217;s&nbsp;<a href=\"https:\/\/reutersinstitute.politics.ox.ac.uk\/news\/chatgpt-now-online-heres-look-how-it-browses-and-reports-latest-news\">analysis of ChatGPT with Bing search<\/a>&nbsp;shows that ChatGPT is capable of maintaining&nbsp;<strong>neutrality on polarizing topics<\/strong>, while it lacks consistency in breaking news updates. In English queries on non-English events, it predominantly sources English content,&nbsp;<strong>neglecting original language sources<\/strong>.<\/li>\n<\/ul>\n\n\n\n<h2 class=\"wp-block-heading\" id=\"ember2536\">Ethical Challenges of Generated Content<\/h2>\n\n\n\n<ul class=\"wp-block-list\">\n<li>In his&nbsp;<a href=\"https:\/\/ehudreiter.com\/2023\/09\/26\/nlg-texts-should-not-upset-people\/\">blog post<\/a>,&nbsp;<a href=\"https:\/\/www.linkedin.com\/in\/ehud-reiter-331b1747?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAAn0NhUBBKnye6uyOD_ARz9xGSCfXs6dEPw\">Ehud Reiter<\/a>&nbsp;points out that generated texts, even when accurate, can sometimes&nbsp;<strong>lack the sensitivity<\/strong>&nbsp;that doctors exhibit. 
For example, doctors sometimes choose not to mention a highly unlikely diagnosis to avoid causing unnecessary alarm in patients and may avoid criticizing bad habits like smoking to prevent negative reactions. Moreover, LLMs may occasionally propose actions, like dietary recommendations, that individuals are incapable of carrying out, potentially impacting their self-esteem adversely.<\/li>\n\n\n\n<li><a href=\"https:\/\/ethanedwards.substack.com\/p\/large-language-models-will-be-great\">Ethan Edwards<\/a>&nbsp;explores the scalability of&nbsp;<strong>censorship facilitated by LLMs<\/strong>, highlighting their ability to evaluate every published text for potential risk. Traditionally, subversive topics must gain significant traction before censors can detect and label them for automatic identification. LLMs make it easier to identify such content before it has a chance to spread.&nbsp;<\/li>\n\n\n\n<li><a href=\"https:\/\/www.linkedin.com\/in\/msheehan2?miniProfileUrn=urn%3Ali%3Afs_miniProfile%3AACoAAA8bPZcBX-yPwI-MmFZG73eg6c3t_PkDwKk\">Matt Sheehan<\/a>&nbsp;analyses&nbsp;<strong>China&#8217;s AI regulation,<\/strong>&nbsp;looking at its components, motivation, and roots, in a&nbsp;<a href=\"https:\/\/carnegieendowment.org\/2023\/07\/10\/china-s-ai-regulations-and-how-they-get-made-pub-90117\">series of three papers<\/a>.<\/li>\n\n\n\n<li>Do the Rewards Justify the Means? The&nbsp;<a href=\"https:\/\/arxiv.org\/pdf\/2304.03279.pdf\">paper<\/a>&nbsp;(2023\/06) introduces a dataset examining&nbsp;<strong>ethical behavior<\/strong>&nbsp;through text-based decision-making games. 
In essence, it reveals that reinforcement learning algorithms trained on prioritizing rewards exhibit&nbsp;<strong>Machiavellian behavior<\/strong>, while GPT-4 tends to display higher moral considerations, especially when initiated with ethical prompts.<\/li>\n\n\n\n<li>The&nbsp;<a href=\"https:\/\/c2pa.org\/\">C2PA&#8217;s<\/a>&nbsp;introduction of the&nbsp;<a href=\"https:\/\/contentcredentials.org\/\">Content Credentials pin<\/a>&nbsp;offers a means to inspect the creation of images and audio files, checking for AI usage and potential tampering. Although the image database is limited (based on our exploration), this tool holds promise for verifying edit history and&nbsp;<strong>attributing credit<\/strong>. Similarly, tools like&nbsp;<a href=\"https:\/\/gptzero.me\/\">GPTzero<\/a>&nbsp;or&nbsp;<a href=\"https:\/\/sapling.ai\/ai-content-detector\">Sapling<\/a>&nbsp;can identify if a text was AI-generated.<\/li>\n\n\n\n<li><a href=\"https:\/\/www.wired.com\/story\/ai-chatbots-can-guess-your-personal-information\/\">Wired<\/a>&nbsp;reports on a study by ETH Zurich showing that&nbsp;<strong>LLMs can often deduce personal information&nbsp;<\/strong>from comments based on the details mentioned and the language used. 
You can compare yourself to an LLM&nbsp;<a href=\"https:\/\/llm-privacy.org\/\">here<\/a>.<\/li>\n<\/ul>\n\n\n\n<p>Please <a href=\"https:\/\/www.linkedin.com\/pulse\/geneeas-ai-spotlight-6-geneea-nogaf%3FtrackingId=TqSDfanit%252F8%252FXyTfs8dgJA%253D%253D\/?trackingId=TqSDfanit%2F8%2FXyTfs8dgJA%3D%3D\">subscribe<\/a> and stay tuned for the next issue of Geneea\u2019s AI Spotlight newsletter!<\/p>\n","protected":false},"excerpt":{"rendered":"<p>The sixth edition of our newsletter on Large Language Models is here.<\/p>\n<p>Today, we take a look at<\/p>\n<p>\u2022 developments in AI infrastructure<br \/>\n\u2022 new models and prompting methods<br \/>\n\u2022 multimodal models,<br \/>\n\u2022 newsroom innovations, and<br \/>\n\u2022 ethical challenges.<\/p>\n","protected":false},"author":15,"featured_media":1682,"comment_status":"closed","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"inline_featured_image":false,"footnotes":""},"categories":[378,374],"tags":[244,240,242],"class_list":["post-1679","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-large-language-models","category-newsletter","tag-ai","tag-generativeai","tag-newsletter"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.3 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Geneea&#039;s AI Spotlight #6 - Geneea News<\/title>\n<meta name=\"description\" content=\"LLM newsletter #6: developments in AI infrastructure, new models and prompting methods, multimodal models,\u00a0newsroom innovations, and ethical challenges.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/geneea.com\/news\/geneeas-ai-spotlight-6\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Geneea&#039;s AI Spotlight #6 
- Geneea News\" \/>\n<meta property=\"og:description\" content=\"LLM newsletter #6: developments in AI infrastructure, new models and prompting methods, multimodal models,\u00a0newsroom innovations, and ethical challenges.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/geneea.com\/news\/geneeas-ai-spotlight-6\" \/>\n<meta property=\"og:site_name\" content=\"Geneea News\" \/>\n<meta property=\"article:published_time\" content=\"2023-11-10T08:32:55+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2026-01-27T21:36:24+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/geneea.com\/news\/wp-content\/uploads\/2023\/11\/newsletter_6_robot_picture-do-news-1024x575.png\" \/>\n\t<meta property=\"og:image:width\" content=\"1024\" \/>\n\t<meta property=\"og:image:height\" content=\"575\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Marcela Soukupova\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Marcela Soukupova\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. 
reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"8 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\\\/\\\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/geneeas-ai-spotlight-6#article\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/geneeas-ai-spotlight-6\"},\"author\":{\"name\":\"Marcela Soukupova\",\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/#\\\/schema\\\/person\\\/69c8751a4c026723f4bac2e892f52cd8\"},\"headline\":\"Geneea&#8217;s AI Spotlight #6\",\"datePublished\":\"2023-11-10T08:32:55+00:00\",\"dateModified\":\"2026-01-27T21:36:24+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/geneeas-ai-spotlight-6\"},\"wordCount\":1837,\"publisher\":{\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/#organization\"},\"image\":{\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/geneeas-ai-spotlight-6#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/geneea.com\\\/news\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/newsletter_6_robot_picture-do-news.png\",\"keywords\":[\"AI\",\"generativeAI\",\"newsletter\"],\"articleSection\":[\"Large language models\",\"Newsletter\"],\"inLanguage\":\"en-US\"},{\"@type\":\"WebPage\",\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/geneeas-ai-spotlight-6\",\"url\":\"https:\\\/\\\/geneea.com\\\/news\\\/geneeas-ai-spotlight-6\",\"name\":\"Geneea's AI Spotlight #6 - Geneea News\",\"isPartOf\":{\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/geneeas-ai-spotlight-6#primaryimage\"},\"image\":{\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/geneeas-ai-spotlight-6#primaryimage\"},\"thumbnailUrl\":\"https:\\\/\\\/geneea.com\\\/news\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/newsletter_6_robot_picture-do-news.png\",\"datePublished\":\"2023-11-10T08:32:55+00:00\",\"dateModified\":\"2026-01-27T21:36:24+00:00\",\"description\":\"LLM 
newsletter #6: developments in AI infrastructure, new models and prompting methods, multimodal models,\u00a0newsroom innovations, and ethical challenges.\",\"breadcrumb\":{\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/geneeas-ai-spotlight-6#breadcrumb\"},\"inLanguage\":\"en-US\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\\\/\\\/geneea.com\\\/news\\\/geneeas-ai-spotlight-6\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/geneeas-ai-spotlight-6#primaryimage\",\"url\":\"https:\\\/\\\/geneea.com\\\/news\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/newsletter_6_robot_picture-do-news.png\",\"contentUrl\":\"https:\\\/\\\/geneea.com\\\/news\\\/wp-content\\\/uploads\\\/2023\\\/11\\\/newsletter_6_robot_picture-do-news.png\",\"width\":1922,\"height\":1080},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/geneeas-ai-spotlight-6#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\\\/\\\/geneea.com\\\/news\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Geneea&#8217;s AI Spotlight #6\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/#website\",\"url\":\"https:\\\/\\\/geneea.com\\\/news\\\/\",\"name\":\"Geneea News\",\"description\":\"Learn more about what&#039;s happening at Geneea: new NLP features, newest case studies, tutoring projects, conferences we attended, etc.\",\"publisher\":{\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\\\/\\\/geneea.com\\\/news\\\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"en-US\"},{\"@type\":\"Organization\",\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/#organization\",\"name\":\"Geneea 
News\",\"url\":\"https:\\\/\\\/geneea.com\\\/news\\\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/#\\\/schema\\\/logo\\\/image\\\/\",\"url\":\"https:\\\/\\\/geneea.com\\\/news\\\/wp-content\\\/uploads\\\/2022\\\/02\\\/cropped-geneea-logo-50pc.png\",\"contentUrl\":\"https:\\\/\\\/geneea.com\\\/news\\\/wp-content\\\/uploads\\\/2022\\\/02\\\/cropped-geneea-logo-50pc.png\",\"width\":242,\"height\":64,\"caption\":\"Geneea News\"},\"image\":{\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/#\\\/schema\\\/logo\\\/image\\\/\"}},{\"@type\":\"Person\",\"@id\":\"https:\\\/\\\/geneea.com\\\/news\\\/#\\\/schema\\\/person\\\/69c8751a4c026723f4bac2e892f52cd8\",\"name\":\"Marcela Soukupova\",\"image\":{\"@type\":\"ImageObject\",\"inLanguage\":\"en-US\",\"@id\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/44f35824640c6a5b31bfef2f478d704874dc3d81bfad511c158ab12274072e16?s=96&d=mm&r=g\",\"url\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/44f35824640c6a5b31bfef2f478d704874dc3d81bfad511c158ab12274072e16?s=96&d=mm&r=g\",\"contentUrl\":\"https:\\\/\\\/secure.gravatar.com\\\/avatar\\\/44f35824640c6a5b31bfef2f478d704874dc3d81bfad511c158ab12274072e16?s=96&d=mm&r=g\",\"caption\":\"Marcela Soukupova\"},\"sameAs\":[\"http:\\\/\\\/Marcela%20Soukupova\"],\"url\":\"https:\\\/\\\/geneea.com\\\/news\\\/author\\\/marcela-soukupova\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. 
Written by Marcela Soukupova · Estimated reading time: 8 minutes