

The purpose of 1-1s

First, they create human connection between you and your manager.

  1. That doesn’t mean you spend the whole time talking about your hobbies or families or making small talk about the weekend.
  2. But letting your manager into your life a little is important: when stressful things happen, it will be much easier to ask for time off or say what you need if your manager has context on you as a person.
  3. That said, being an introvert is not an excuse for making no effort to treat people like real human beings. The bedrock of strong teams is human connection, which leads to trust.

Top best practices for building production-ready AI apps

  1. Build evals. Define test cases to ensure you're actively improving your app and not causing regressions.

  2. Break one LLM call into multiple. AI systems do much better when you chain several LLM calls together. For example, instead of sending a single call to a model to generate code, send the task to an "architect" model to generate a plan, then a "coding" model to generate the code, then a "reviewer" model to verify it.

  3. Start simple, with a single LLM call. Then iterate with prompt engineering (few-shot examples, chain of thought, descriptive prompts) before building a more complex system with chained LLM calls.
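The architect → coder → reviewer chain above can be sketched as follows. This is a minimal illustration: `call_llm(role, prompt)` is a hypothetical stand-in for whatever model API you use, stubbed here with canned responses so the sketch runs end to end.

```python
# Minimal sketch of chaining multiple LLM calls, one narrow role per call.
# `call_llm` is a hypothetical stand-in for a real model API; it is stubbed
# with canned responses so the pipeline is runnable.

def call_llm(role: str, prompt: str) -> str:
    canned = {
        "architect": "Plan: write a function add(a, b) that returns a + b.",
        "coder": "def add(a, b):\n    return a + b",
        "reviewer": "APPROVED",
    }
    return canned[role]

def build_feature(task: str) -> str:
    plan = call_llm("architect", f"Create an implementation plan for: {task}")
    code = call_llm("coder", f"Write code following this plan:\n{plan}")
    verdict = call_llm("reviewer", f"Review this code for correctness:\n{code}")
    if verdict != "APPROVED":
        raise ValueError(f"Review failed: {verdict}")
    return code

print(build_feature("add two numbers"))
```

Each stage gets a narrower prompt than a single do-everything call, which is what makes the chained version easier to evaluate and debug stage by stage.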

Globalizing Your Startup/One Person Company

TL;DR

  1. Build in public.
  2. Individual entrepreneurship is a future trend: AI and remote work make one-person companies more feasible.
  3. No need to rely heavily on funding; start with lean startup methods.
  4. Register platforms and services as a company, not as an individual: a company entity mitigates the risk of unlimited personal liability.

How Elon Musk is so effective

Operating philosophy

  • Relentlessly focused on weekly progress
  • The Elon method boiled all the way down is "what have you gotten done this week?"
  • Rejects traditional corporate timelines and long-term planning over months and years

Engineering-first approach

  • Works predominantly with engineers
  • Personally understands all technical systems
  • Avoids non-engineering meetings and conversations when possible; joins all core engineering-focused meetings
  • Skips management layers completely
  • Talks directly to the person in charge of a specific project
  • He's in there with 24-year-old engineers, and they'll walk through fire for him
  • Engineers deeply respect his technical knowledge

Cursor AI Done Right: Lessons from Building Multiple MVPs

Cursor performs poorly when it isn't given enough context about your project. Here's what you can do to improve your Cursor workflow.

1. Brainstorm first, code second

Claude/o1 are your best friends here. Create a single document containing every detail of your project:

  • core features
  • goals & objectives
  • tech stack & packages
  • project folder structure
  • database design
  • landing page components
  • color palette
  • copywriting

All of this should go into an instruction.md (name it whatever you want) so Cursor can index it at any time.
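A skeleton for such a file might look like this. The headings mirror the list above; the sample entries are purely illustrative and should be replaced with your project's specifics:

```markdown
# Project: <name>

## Core features
- ...

## Goals & objectives
- ...

## Tech stack & packages
- e.g. Next.js, Tailwind, Supabase (illustrative choices)

## Project folder structure
- /app, /components, /lib

## Database design
- tables, columns, relations

## Landing page components
- hero, pricing, FAQ

## Color palette
- primary, secondary, accent

## Copywriting
- headline, subheadline, CTA text
```

The more concrete each section is, the less Cursor has to guess when generating code.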

What's Model Context Protocol (MCP)

Introduction

Model Context Protocol (MCP) is an open protocol that standardizes how applications provide context to LLMs. It's a new standard for connecting AI assistants to the systems where data lives, including content repositories, business tools, and development environments. Its aim is to help frontier models produce better, more relevant responses.

Why MCP?

As AI assistants gain mainstream adoption, the industry has invested heavily in model capabilities, achieving rapid advances in reasoning and quality. Yet even the most sophisticated models are constrained by their isolation from data—trapped behind information silos and legacy systems. Every new data source requires its own custom implementation, making truly connected systems difficult to scale.

MCP addresses this challenge. It provides a universal, open standard for connecting AI systems with data sources, replacing fragmented integrations with a single protocol. The result is a simpler, more reliable way to give AI systems access to the data they need.
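MCP messages are exchanged as JSON-RPC 2.0. As a rough illustration, a client asking a server to invoke a tool sends a request like the one below; the `tools/call` method comes from the MCP specification, but the tool name and arguments here are hypothetical and the shape is simplified:

```python
import json

# Sketch of an MCP-style JSON-RPC 2.0 request. The method name follows the
# MCP spec; the tool name and arguments are illustrative, not normative.
request = {
    "jsonrpc": "2.0",
    "id": 1,
    "method": "tools/call",
    "params": {
        "name": "search_documents",           # hypothetical tool name
        "arguments": {"query": "Q3 revenue"}, # hypothetical arguments
    },
}

wire = json.dumps(request)
print(wire)
```

Because every data source speaks the same message format, an AI application only needs one client implementation instead of one custom integration per backend.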

What is RAG?

Introduction 🚀

RAG is a popular method that improves accuracy and relevance by finding the right information from reliable sources and transforming it into useful answers.

Large Language Models are trained on a fixed dataset, which limits their ability to handle private or recent information. They can sometimes "hallucinate", providing incorrect yet believable answers. Fine-tuning can help, but it is expensive and not ideal for retraining repeatedly on new data. The Retrieval-Augmented Generation (RAG) framework addresses this issue by using external documents to improve the LLM's responses through in-context learning. RAG ensures that the information provided by the LLM is not only contextually relevant but also accurate and up-to-date.
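A toy version of the retrieve-then-generate loop looks like this. It uses naive word-overlap scoring in place of a real embedding index, and all names are illustrative; a production system would use embeddings and a vector store:

```python
# Toy RAG pipeline: retrieve the most relevant document by word overlap,
# then build an augmented prompt for the LLM. This is only a structural
# sketch; real systems use embeddings and a vector store.

def retrieve(query: str, docs: list[str], k: int = 1) -> list[str]:
    q_words = set(query.lower().split())
    scored = sorted(
        docs,
        key=lambda d: len(q_words & set(d.lower().split())),
        reverse=True,
    )
    return scored[:k]

def build_prompt(query: str, docs: list[str]) -> str:
    context = "\n".join(retrieve(query, docs))
    return f"Answer using only this context:\n{context}\n\nQuestion: {query}"

docs = [
    "The company was founded in 2019 in Berlin.",
    "Quarterly revenue grew 12% in Q3.",
]
print(build_prompt("When was the company founded?", docs))
```

The augmented prompt grounds the model in retrieved text, which is what lets RAG answer questions about private or recent data the model was never trained on.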

[Figure: final RAG pipeline diagram]

There are four main components in RAG: