That doesn’t mean you spend the whole time talking about your hobbies or families or making small talk about the weekend.
But letting your manager into your life a little bit is important, because when there are stressful things happening, it will be much easier to ask your manager for time off or tell him what you need if he has context on you as a person.
Being an introvert is not an excuse for making no effort to treat people like real human beings, however. The bedrock of strong teams is human connection, which leads to trust.
Define test cases to ensure you're actively improving your app and not causing any regressions.
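As a minimal sketch, a regression suite for an LLM app can be a list of prompt/assertion pairs that you run on every change; `run_app` below is a hypothetical stand-in for your own application entry point.

```python
# Minimal regression-test sketch for an LLM app.
# `run_app` is a hypothetical stand-in for your application's entry point.

TEST_CASES = [
    # (input prompt, predicate the output must satisfy)
    ("Summarize: The cat sat on the mat.", lambda out: "cat" in out.lower()),
    ("Translate 'hello' to French.", lambda out: "bonjour" in out.lower()),
]

def run_regression_suite(run_app):
    failures = []
    for prompt, check in TEST_CASES:
        output = run_app(prompt)
        if not check(output):
            failures.append((prompt, output))
    return failures

if __name__ == "__main__":
    # Plug in your real app here; this lambda is only a placeholder.
    failures = run_regression_suite(lambda prompt: "Bonjour! The cat sat.")
    print(f"{len(failures)} failing case(s)")
```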
Break down one LLM call into multiple
AI systems often perform much better when you chain several LLM calls together. For example, instead of sending a single request to one model to generate code, send the task to an "architect" model to produce a plan, then a "coding" model to write the code, then a "reviewer" model to verify it.
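As an illustrative sketch (not any specific vendor's API), the chain can be three calls sharing one generic helper; the `llm` function and the model names are assumptions you would replace with your own setup.

```python
# Sketch of an architect -> coder -> reviewer chain.
# `llm(model, prompt)` is a hypothetical helper wrapping whatever
# completion API you use; the model names are placeholders.

def llm(model: str, prompt: str) -> str:
    raise NotImplementedError("wire this to your completion API")

def generate_code(task: str) -> str:
    plan = llm("architect-model", f"Write a step-by-step implementation plan for: {task}")
    code = llm("coding-model", f"Implement this plan as Python code:\n{plan}")
    review = llm("reviewer-model", f"Review this code for bugs:\n{code}\n\nTask: {task}")
    # Feed the review back into the coder for one revision pass.
    return llm("coding-model", f"Revise the code based on this review:\n{review}\n\nCode:\n{code}")
```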
Start simple (with 1 LLM call)
Then iterate with prompt engineering (few-shot examples, chain-of-thought prompting, more descriptive instructions) before building a more complex system with chained LLM calls.
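To make that concrete, here is a sketch of what "few-shot plus chain of thought" can look like inside a single prompt; the classification task and labels are invented for illustration.

```python
# Sketch of iterating on a single call with prompt engineering:
# a descriptive instruction, two few-shot examples, and a
# chain-of-thought cue, all in one prompt string.

FEW_SHOT_PROMPT = """You are a support-ticket classifier.
Label each ticket as BUG, FEATURE_REQUEST, or QUESTION.
Think step by step, then give the label on the last line.

Ticket: "The app crashes when I upload a PNG."
Reasoning: The user describes broken behavior, so this is a defect.
Label: BUG

Ticket: "Could you add dark mode?"
Reasoning: The user asks for new functionality.
Label: FEATURE_REQUEST

Ticket: "{ticket}"
Reasoning:"""

def build_prompt(ticket: str) -> str:
    return FEW_SHOT_PROMPT.format(ticket=ticket)

print(build_prompt("How do I reset my password?"))
```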
Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to LLMs. It's a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses.
As AI assistants gain mainstream adoption, the industry has invested heavily in model capabilities, achieving rapid advances in reasoning and quality. Yet even the most sophisticated models are constrained by their isolation from data—trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale.
MCP addresses this challenge. It provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. The result is a simpler, more reliable way to give AI systems access to the data they need.
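A minimal sketch of what exposing a data source over MCP can look like, assuming the official MCP Python SDK's FastMCP interface; the server name and tool are toy examples, not part of the protocol spec.

```python
# Minimal MCP server sketch, assuming the MCP Python SDK's FastMCP
# interface (pip install mcp). The tool below is a toy placeholder.
from mcp.server.fastmcp import FastMCP

mcp = FastMCP("docs-server")

@mcp.tool()
def search_docs(query: str) -> str:
    """Search an internal document store and return matching snippets."""
    # Placeholder: a real server would query a content repository here.
    return f"No results for '{query}' (stub)."

if __name__ == "__main__":
    # Serves the tool over stdio so an MCP-capable client can connect.
    mcp.run()
```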
RAG is a popular method that improves accuracy and relevance by retrieving the right information from reliable sources and using it to ground the model's answers.
Large Language Models are trained on a fixed dataset, which limits their ability to handle private or recent information. They can also "hallucinate", producing incorrect yet believable answers. Fine-tuning can help, but it is expensive and impractical to repeat every time new data arrives. The Retrieval-Augmented Generation (RAG) framework addresses this by supplying external documents to the LLM at query time, improving its responses through in-context learning. RAG helps ensure that the information the LLM provides is not only contextually relevant but also accurate and up to date.
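A minimal RAG sketch under these assumptions: documents are retrieved by cosine similarity of embeddings and then placed in the prompt for in-context learning. `embed` and `llm` are hypothetical stand-ins for your embedding model and completion API.

```python
# Minimal RAG sketch: retrieve the most relevant documents by cosine
# similarity, then put them into the prompt (in-context learning).
import numpy as np

def embed(text: str) -> np.ndarray:
    raise NotImplementedError("call your embedding model here")

def llm(prompt: str) -> str:
    raise NotImplementedError("call your completion API here")

def retrieve(query: str, docs: list[str], k: int = 3) -> list[str]:
    q = embed(query)
    scores = []
    for doc in docs:
        d = embed(doc)
        scores.append(float(q @ d / (np.linalg.norm(q) * np.linalg.norm(d))))
    top = np.argsort(scores)[::-1][:k]
    return [docs[i] for i in top]

def answer(query: str, docs: list[str]) -> str:
    context = "\n\n".join(retrieve(query, docs))
    return llm(f"Answer using only this context:\n{context}\n\nQuestion: {query}")
```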