Satya Teja Muddada
LLM-Optimized Cloud Architectures for Scalable and Reliable AI Systems
Abstract:
Large Language Models (LLMs) have become central to the advancement of generative AI, enabling applications in natural language understanding, content generation, and enterprise automation. As these models grow in scale and complexity, conventional cloud environments often struggle to meet the computational and operational demands of training, fine-tuning, and serving them. This session explores how purpose-built cloud architectures can be designed to support LLM workloads effectively.
The presentation examines cloud-native architectures that integrate high-performance compute resources such as GPUs and TPUs with scalable storage and low-latency networking. These environments support distributed training and model parallelism, which are essential for managing models with billions of parameters. The session also discusses how orchestration platforms such as Kubernetes, together with distributed training frameworks, allow organizations to scale infrastructure efficiently across hybrid and multi-cloud environments.
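To make the case for model parallelism concrete, a back-of-the-envelope sketch of per-GPU weight memory shows why models with tens of billions of parameters cannot fit on a single accelerator. The parameter count, precision, and GPU count below are illustrative assumptions, not figures from any specific deployment.

```python
# Hedged sketch: rough per-GPU memory for model weights when parameters are
# sharded evenly across devices. Real frameworks also account for optimizer
# state, activations, and gradients, which this sketch omits.

def shard_memory_gb(total_params: int, bytes_per_param: int, num_gpus: int) -> float:
    """Approximate per-GPU weight memory (GB) with even parameter sharding."""
    total_bytes = total_params * bytes_per_param
    return total_bytes / num_gpus / 1e9

# Illustrative: a 70B-parameter model in fp16 (2 bytes per parameter).
single_gpu = shard_memory_gb(70_000_000_000, 2, 1)   # 140.0 GB of weights
eight_gpus = shard_memory_gb(70_000_000_000, 2, 8)   # 17.5 GB per GPU
```

Even split eight ways, the weights alone approach the capacity of a typical datacenter GPU, which is why the session pairs parallelism strategies with infrastructure design rather than treating either in isolation.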
In addition to infrastructure design, the talk highlights the role of MLOps practices in reliable LLM deployment. Continuous retraining, automated deployment, and monitoring help maintain model performance as data and user requirements evolve. Integrating these practices into LLM infrastructure reduces operational complexity and smooths the transition from model development to production use.
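One monitoring pattern behind continuous retraining can be sketched as a simple drift check: compare a recent evaluation metric against a baseline and flag the model for retraining when it degrades beyond a tolerance. The function name, scores, and threshold here are hypothetical illustrations, not part of any particular MLOps platform.

```python
# Hedged sketch of a drift-triggered retraining check. In practice this logic
# would run inside a monitoring pipeline and emit an alert or pipeline trigger
# rather than return a boolean.

def needs_retraining(baseline_score: float,
                     recent_scores: list[float],
                     tolerance: float = 0.05) -> bool:
    """Flag retraining when the recent average falls below baseline minus tolerance."""
    if not recent_scores:
        return False  # no new evaluations yet; nothing to compare
    recent_avg = sum(recent_scores) / len(recent_scores)
    return recent_avg < baseline_score - tolerance

# Illustrative usage: baseline accuracy 0.90 with a 0.05 tolerance.
needs_retraining(0.90, [0.80, 0.82, 0.81])  # degraded well past tolerance: True
needs_retraining(0.90, [0.89, 0.90])        # within tolerance: False
```

The design choice worth noting is the explicit tolerance: retraining has real compute cost for LLMs, so pipelines typically trigger on sustained degradation rather than on every dip in a metric.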
Attendees will gain a practical understanding of how scalability, reliability, and cost efficiency can be balanced when building cloud environments that support the full lifecycle of LLM systems. The session is intended for cloud architects, AI engineers, and technology leaders interested in building robust foundations for modern AI applications.
Profile:
Satya Teja Muddada is a highly accomplished Hybrid Cloud Architect with over 17 years of experience designing and delivering enterprise-scale cloud and infrastructure solutions. His expertise spans AWS, GCP, and Azure, where he has led large-scale transformations, cloud migrations, and modernization programs for Fortune 500 clients across industries such as automotive, energy, and financial services. Teja is recognized for building secure, scalable, and cost-efficient cloud-native and multi-cloud architectures, leveraging automation, DevOps practices, and infrastructure-as-code to accelerate delivery and ensure compliance.
With deep hands-on knowledge of containerization, orchestration, and advanced data platforms, Teja has architected solutions that integrate MLOps, real-time analytics, and data lake implementations. He has successfully guided enterprises in adopting Kubernetes, Terraform, CI/CD pipelines, and serverless frameworks to improve agility and performance. Holding 10 AWS certifications and 2 GCP certifications, along with specialized training in DevOps, data engineering, and performance optimization, he exemplifies technical excellence and continuous learning.
Beyond technical delivery, Teja is known for his leadership in mentoring teams, conducting cloud strategy workshops, and aligning architecture decisions with business priorities. His work at IBM, Deloitte, Dun & Bradstreet, Cognizant, and Capgemini showcases his ability to combine deep technical skills with client engagement, making him a trusted advisor for cloud transformation initiatives. As a speaker, he brings real-world insights from leading global organizations through their digital transformation journeys.

