Talk Overview:
This intermediate-level technical talk demystifies recent GenAI advances for software engineers who integrate AI into applications. Moving beyond the hype, we explore architectural and scaling innovations (sparse attention, scaling laws), training methodologies (RLHF, instruction tuning), and practical techniques such as Chain-of-Thought prompting that you can apply today.
Key Topics Covered:
- Architectural efficiency and choosing the right model for your use case
- Training techniques that make models easier to work with
- Chain-of-Thought prompting for improved reasoning
- Building AI agents with frameworks like LangChain
- Emerging standards for simplified integration, such as the Model Context Protocol (MCP) and Agent2Agent (A2A)
- Extended context windows and multimodal capabilities
- Practical considerations: latency, cost, guardrails, and testing
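Of the topics above, Chain-of-Thought prompting is the quickest to try. The sketch below shows the zero-shot variant: appending a reasoning cue to the question before sending it to any LLM API. The function name and cue wording are illustrative assumptions, not code from the talk:

```python
# Zero-shot Chain-of-Thought: nudge the model to emit intermediate
# reasoning steps before its final answer by appending a cue phrase.
# COT_CUE and build_cot_prompt are illustrative names, not a library API.

COT_CUE = "Let's think step by step."

def build_cot_prompt(question: str) -> str:
    """Wrap a question with a Chain-of-Thought cue so the model
    reasons out loud before answering."""
    return f"{question}\n\n{COT_CUE}"

prompt = build_cot_prompt(
    "A train leaves at 3 PM traveling 60 mph. How far has it gone by 5 PM?"
)
print(prompt)
```

The resulting prompt string would then be passed to whichever LLM API your application uses; few-shot variants instead prepend worked examples that demonstrate the reasoning style.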
Who Should Attend:
Mid-level to senior software engineers building or maintaining applications that integrate LLM APIs and other GenAI capabilities.