# Tech Stack

AI Engineering (AIE) builds applications on top of existing foundation models. Model APIs made those capabilities easy to consume, and the field exploded.

## Three Layers

The AI stack is made up of three layers:

<figure><img src="https://3362254923-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F3RA6F4b1kPuyIJpX1t8w%2Fuploads%2Fgit-blob-ae23a0bf6b381392618b375173585024a06388ad%2Flayers.png?alt=media" alt="ai engineer"><figcaption></figcaption></figure>

1. <mark style="background-color:purple;">Application</mark> Layer - Product companies turn model capabilities into user value
2. <mark style="background-color:purple;">Infrastructure</mark> Layer - Infra companies make capabilities accessible and scalable
3. <mark style="background-color:purple;">Model</mark> Layer - Model companies create foundational capabilities

Big Tech companies and AI startups may focus on all three layers,

<figure><img src="https://3362254923-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F3RA6F4b1kPuyIJpX1t8w%2Fuploads%2Fgit-blob-4ff7842e83247289b230d78e188526fb0fac66f6%2Flayers-big-tech.png?alt=media" alt="ai engineer"><figcaption></figcaption></figure>

<figure><img src="https://3362254923-files.gitbook.io/~/files/v0/b/gitbook-x-prod.appspot.com/o/spaces%2F3RA6F4b1kPuyIJpX1t8w%2Fuploads%2Fgit-blob-0224f691f0210d63b7f0d2abf81944ab0de21157%2Flayers-startup.png?alt=media" alt="ai engineer"><figcaption></figcaption></figure>

whereas AI Engineering in other companies focuses on the **Application** layer.

## Companies

**Product Companies** build user-facing apps. Deploy fast. Iterate based on user feedback. Examples include [Cursor](https://www.cursor.sh/), [GitHub Copilot](https://copilot.github.com/), [Devin AI](https://www.cognition-labs.com/devin), [Lindy](https://www.lindy.ai/), and [Bland](https://www.bland.ai/).

**Infrastructure Companies** provide tools for product teams. B2B focus. Built for scale and reliability.

* **Inference Providers** serve models through APIs. [OpenRouter](https://openrouter.ai/), [Together AI](https://www.together.ai/), [Cohere](https://cohere.com/), [Amazon Bedrock](https://aws.amazon.com/bedrock/), [Hugging Face](https://huggingface.co/), [Google Vertex AI](https://cloud.google.com/vertex-ai), [Replicate](https://replicate.com/), [Fireworks AI](https://fireworks.ai/).
* **Database Providers** serve AI-optimised retrieval. [Pinecone](https://www.pinecone.io/), [Weaviate](https://weaviate.io/), [Qdrant](https://qdrant.tech/), [Chroma](https://www.trychroma.com/).
* **Observability Providers** monitor and evaluate performance. [Galileo](https://galileo.ai/), [Phoenix](https://phoenix.arize.com/), [Langfuse](https://langfuse.com/), [Opik](https://www.comet.com/site/products/opik/), [LangSmith](https://www.langchain.com/langsmith).
* **Evaluation Frameworks** assess aspects of LLM performance such as correctness, faithfulness, and safety, using metrics like answer relevancy, hallucination detection, and task completion.
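Many inference providers (OpenRouter, Together AI, Fireworks AI, and others) expose an OpenAI-compatible chat-completions API, so switching providers is often just a base-URL and model-name change. A minimal sketch of building such a request body — the model name below is an illustrative assumption, not a recommendation:

```python
import json

def chat_request(model: str, prompt: str) -> dict:
    """Build an OpenAI-style chat-completions request body."""
    return {
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

# Illustrative model name; check the provider's model list for real IDs.
body = chat_request("meta-llama/llama-3-8b-instruct", "Hello!")
print(json.dumps(body, indent=2))
# POST this body to the provider's chat-completions endpoint
# (e.g. https://openrouter.ai/api/v1/chat/completions) with an
# Authorization: Bearer <API_KEY> header.
```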
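The core operation a vector database performs is nearest-neighbour search over embeddings. A pure-Python sketch of that idea — real providers like Pinecone or Qdrant add approximate indexing (e.g. HNSW), metadata filtering, and persistence on top:

```python
import math

def cosine(a, b):
    """Cosine similarity between two equal-length vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(y * y for y in b))
    return dot / (na * nb)

def top_k(query, docs, k=2):
    """docs: list of (id, embedding) pairs. Returns the k most similar ids."""
    ranked = sorted(docs, key=lambda d: cosine(query, d[1]), reverse=True)
    return [doc_id for doc_id, _ in ranked[:k]]

# Toy 2-D "embeddings"; real ones have hundreds or thousands of dimensions.
docs = [("a", [1.0, 0.0]), ("b", [0.0, 1.0]), ("c", [0.7, 0.7])]
print(top_k([1.0, 0.1], docs))  # "a" ranks first
```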
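To make the evaluation bullet concrete, here is an illustrative sketch (not any specific framework's API) of a naive faithfulness check: the fraction of the answer's tokens that appear in the retrieved context. Real frameworks use LLM judges and much richer metrics; this only shows the shape of the idea:

```python
def faithfulness(answer: str, context: str) -> float:
    """Fraction of answer tokens grounded in the context (naive proxy)."""
    answer_tokens = set(answer.lower().split())
    context_tokens = set(context.lower().split())
    if not answer_tokens:
        return 0.0
    return len(answer_tokens & context_tokens) / len(answer_tokens)

ctx = "the eiffel tower is in paris"
print(faithfulness("the tower is in paris", ctx))   # fully grounded
print(faithfulness("the tower is in london", ctx))  # partially grounded
```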

**Model Companies** create foundation models. [Google](https://deepmind.google/models/gemini/), [Meta](https://www.llama.com/), [Anthropic](https://www.anthropic.com/), [OpenAI](https://openai.com/), [Mistral AI](https://mistral.ai/), [xAI](https://x.ai/), [Stability AI](https://stability.ai/).
