The Future of Generative AI & Keeping LLMs Relevant using RAG

Join us on May 29, 2024, to explore the exciting world of Generative AI and how to keep LLMs relevant with RAG techniques.

By Copenhagen Fintech

Date and time

Wednesday, May 29 · 3 - 6pm CEST

Location

Trifork A/S

Sankt Knuds Torv 9, 2nd floor (2. sal), 8000 Aarhus, Denmark

About this event

  • 3 hours

Hosts: Trifork, Microsoft and Copenhagen Fintech

Event Location: Trifork, Sankt Knuds Torv 9, 2nd floor, 8000 Aarhus C, (entrance near Netto, follow the signs)

Date/Time: Wednesday, May 29th 2024, 15:00 - 18:00 CEST

Who this event is for: CTOs, Fintech leaders, developers, lead architects, AI engineers, etc. Please still feel free to sign up even if you don't formally fit one of the titles above; we'd still love to meet you and share the knowledge.

In a world where LLMs like ChatGPT and others are widely used but often limited by their training data, it is crucial to explore innovative approaches to bridge the gap between their knowledge and real-time information.

Meet us in Aarhus together with our partners, Trifork and Microsoft, for a couple of hours in the afternoon discussing the transformative potential and how-to of RAG in Large Language Models (LLMs), how it can shape the future of AI technology within the financial services ecosystem, and how it keeps LLMs relevant and up to date with the latest information.

Our speakers will shed light on inherent challenges faced by LLMs, and how RAG addresses challenges like outdated information and hallucination, empowering Fintech founders to develop knowledge-aware applications for enhanced decision-making and customer experiences.

In addition, we will cover insights into how RAG empowers the development of knowledge-aware applications, enabling LLMs to navigate diverse domains with depth and agility, despite the static nature of their training data.

We look forward to seeing you there.


🔊 Speakers & Program 🔊


Welcome & Opening Remarks by Trifork, Microsoft and Copenhagen Fintech


Exploring Advanced RAG Techniques with David Carlos Zachariae, a seasoned software developer at Trifork with a passion for cutting-edge technologies and Large Language Models (LLMs).

With the proliferation of Large Language Models (LLMs) such as ChatGPT, companies and applications are increasingly leveraging AI capabilities. However, a common challenge arises: LLMs lack specific or sensitive knowledge that they were not explicitly trained on. Retrieval Augmented Generation (RAG) emerges as a powerful solution to bridge this gap. RAG addresses this limitation by incorporating context from external knowledge bases into LLM-generated responses, enabling more informed and context-aware answers.
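The baseline flow just described can be sketched in a few lines: retrieve the documents most relevant to a question, then prepend them as context to the prompt sent to an LLM. The word-overlap scorer, toy corpus, and `build_prompt` helper below are illustrative assumptions, not any specific framework's API.

```python
def retrieve(question: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive word overlap with the question (toy retriever)."""
    q_words = set(question.lower().split())
    scored = sorted(
        corpus,
        key=lambda doc: len(q_words & set(doc.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(question: str, context: list[str]) -> str:
    """Combine retrieved context and the question into a single LLM prompt."""
    ctx = "\n".join(f"- {doc}" for doc in context)
    return f"Answer using only this context:\n{ctx}\n\nQuestion: {question}"

corpus = [
    "The payments API supports instant SEPA transfers.",
    "Our KYC checks run nightly at 02:00 CET.",
    "The cafeteria menu changes weekly.",
]
question = "When do KYC checks run?"
prompt = build_prompt(question, retrieve(question, corpus))
print(prompt)
```

In a real system the retriever would use embeddings and a vector index rather than word overlap, but the shape of the pipeline is the same: retrieve, assemble context, generate.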

By retrieving relevant information during the generation process, RAG significantly enhances the quality of LLM-generated content. Notably, RAG has become the dominant architecture for LLM-based systems, powering applications with AI capabilities. In this presentation, we start by explaining the basic architecture and examining the shortcomings of the baseline RAG approach. We then turn to advanced RAG techniques, aiming to equip developers with a comprehensive toolbox for building high-performing RAG applications.

We explore the following:

  1. Modular RAG: A flexible approach that combines multiple techniques in varying orders. By assembling modules, developers can tailor RAG to specific use cases.
  2. Query rewriting, expansion and routing: Leveraging the power of LLMs to refine their own queries. This technique optimizes the retrieval process, leading to more accurate context incorporation.
  3. Reranking: Enhancing the results of the retrieval phase by intelligently reordering retrieved information. Reranking ensures that the most relevant context influences the generated output.
  4. Fine-tuning: Customizing LLMs specifically for RAG tasks. Fine-tuning aligns the model with the nuances of retrieval and generation.
  5. GraphRAG: Harnessing knowledge graphs to enrich context retrieval. Graph-based approaches offer a structured way to connect LLMs with external information.

We discuss the benefits of each technique, explore use cases, and provide theoretical foundations. Practical examples will also illustrate how developers can implement these methods effectively.
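To make one of these techniques concrete, here is a minimal sketch of reranking (technique 3 above): a fast first-stage retriever over-fetches candidates, and a second, more precise scorer reorders them so the most relevant context reaches the LLM first. Both scoring functions here are toy stand-ins for what would normally be a vector search followed by a cross-encoder model.

```python
def first_stage(query: str, docs: list[str], k: int = 4) -> list[str]:
    """Cheap, recall-oriented pass: keep docs sharing any word with the query."""
    q = set(query.lower().split())
    return [d for d in docs if q & set(d.lower().split())][:k]

def rerank(query: str, candidates: list[str]) -> list[str]:
    """More precise pass: reorder by the fraction of query words each doc covers."""
    q = set(query.lower().split())

    def coverage(doc: str) -> float:
        return len(q & set(doc.lower().split())) / len(q)

    return sorted(candidates, key=coverage, reverse=True)

docs = [
    "fraud alerts are sent by email",
    "fraud detection models score each card transaction",
    "card transaction fees are listed online",
]
top = rerank("fraud card transaction", first_stage("fraud card transaction", docs))
print(top[0])
```

The two-stage split matters because precise scorers are expensive: running them over only a small candidate set keeps latency manageable while still improving the context that reaches the generation step.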

Q&A

Microsoft's perspective with Simona Toader, Cloud Solutions Architect.

Q&A

Networking over Drinks, Snacks & LLMs Trivia

18.00 Thank you!


Organized by

Anchored in the Nordic region’s renowned design and digital traditions, Copenhagen Fintech strives to support human-centric financial solutions with potential to shape our global society.