LLM Workshop: Deploying Large Language Models on GCP using RAG
Registration
Registration is now closed (this event already took place).
Details
This workshop is a masterclass in Retrieval-Augmented Generation (RAG), combining the generative power of LLMs with the specificity of information retrieval. The session aims to bridge the gap between academic knowledge and practical skills in cloud-based AI.
🌟 Session Highlights:
- Model Deployment: Receive step-by-step guidance on deploying LLMs to GCP, with a focus on integrating RAG for heightened performance.
- API Connectivity: Engage in practical demonstrations on API integration to facilitate real-time AI interactions.
- Optimization Techniques: Discover methods for fine-tuning your LLM and RAG setup, optimizing for both efficiency and cost.
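To preview the core RAG pattern the workshop covers, here is a minimal retrieve-then-generate sketch. The corpus, the word-overlap scoring, and the prompt template are illustrative stand-ins: a production deployment on GCP would use vector embeddings and a hosted LLM endpoint instead.

```python
# Toy corpus standing in for a real document store (illustrative only).
CORPUS = {
    "billing": "Invoices are issued on the first business day of each month.",
    "support": "Support tickets are answered within 24 hours on weekdays.",
}

def retrieve(query: str, corpus: dict) -> str:
    """Return the document sharing the most words with the query.

    A real RAG system would rank by embedding similarity in a vector
    store rather than raw word overlap.
    """
    q_words = set(query.lower().replace("?", "").split())
    return max(corpus.values(),
               key=lambda doc: len(q_words & set(doc.lower().split())))

def build_prompt(query: str, context: str) -> str:
    """Ground the model's answer in the retrieved context."""
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

# Retrieve relevant context, then hand the grounded prompt to an LLM.
query = "When are invoices issued?"
prompt = build_prompt(query, retrieve(query, CORPUS))
```

The key idea is the two-step flow: retrieval narrows the model's input to relevant facts, so the LLM answers from your data rather than from its parametric memory alone.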
🧠 Why Attend?
- Transition from theory to practice with immersive, hands-on experiences in cloud AI deployments.
- Build skills pivotal for the evolving landscape of cloud services and AI technologies.
👥 Who Should Attend?
- Master’s students and aspiring professionals in Computer Science, AI, and related disciplines looking to advance their practical skill set.
- Developers and future cloud engineers aiming to deepen their expertise in AI model deployment.
- Innovators seeking to understand the synergy between AI and cloud platforms.
🎯 **Join us and connect academic learning with real-world applications. Stand at the forefront of the AI and cloud computing revolution!**
Hosted By
Co-hosted with: IE Campus Life, IE Big Data & AI Club