vercel-logotype Logovercel-logotype Logo
    • DX Platform
      • Previews

        Helping teams ship 6× faster

      • AI

        Powering breakthroughs

    • Managed Infrastructure
      • Fluid compute

        Servers, in serverless form

      • Rendering

        Fast, scalable, and reliable

      • Observability

        Trace every step

      • Security

        Scale without compromising

    • Open Source
      • Next.js

        The native Next.js platform

      • Turborepo

        Speed with Enterprise scale

      • AI SDK

        The AI Toolkit for TypeScript

    • Use Cases
      • AI Apps

        Deploy at the speed of AI

      • Composable Commerce

        Power storefronts that convert

      • Marketing Sites

        Launch campaigns fast

      • Multi-tenant Platforms

        Scale apps with one codebase

      • Web Apps

        Ship features, not infrastructure

    • Users
      • Platform Engineers

        Automate away repetition

      • Design Engineers

        Deploy for every idea

    • Tools
      • Resource Center

        Today’s best practices

      • Marketplace

        Extend and automate workflows

      • Templates

        Jumpstart app development

      • Guides

        Find help quickly

      • Partner Finder

        Get help from solution partners

    • Company
      • Customers

        Trusted by the best teams

      • Blog

        The latest posts and changes

      • Changelog

        See what shipped

      • Press

        Read the latest news

  • Enterprise
  • Docs
  • Pricing
Log InContact
Sign Up
Sign Up
  • All Posts
  • Engineering
  • Community
  • Company News
  • Customers
  • v0
  • Changelog
  • Press
  • No results found for "".
    Try again with a different keyword.

    Featured articles

  • Mar 3

    How Fluid compute works on Vercel

    Fluid compute is Vercel’s next-generation compute model designed to handle modern workloads with real-time scaling, cost efficiency, and minimal overhead. Traditional serverless architectures optimize for fast execution, but struggle with requests that spend significant time waiting on external models or APIs, leading to wasted compute. To address these inefficiencies, Fluid compute dynamically adjusts to traffic demands, reusing existing resources before provisioning new ones. At the center of Fluid is Vercel Functions router, which orchestrates function execution to minimize cold starts, maximize concurrency, and optimize resource usage. It dynamically routes invocations to pre-warmed or active instances, ensuring low-latency execution. By efficiently managing compute allocation, the router prevents unnecessary cold starts and scales capacity only when needed. Let's look at how it intelligently manages function execution.

    Avatar for mcocirioAvatar for c-kirkland
    Mariano and Collier
  • Apr 14

    Migrating Grep from Create React App to Next.js

    Grep is extremely fast code search. You can search over a million repositories for specific code snippets, files, or paths. Search results need to appear instantly without loading spinners. Originally built with Create React App (CRA) as a fully client-rendered Single-Page App (SPA), Grep was fast—but with CRA now deprecated, we wanted to update the codebase to make it even faster and easier to maintain going forward. Here's how we migrated Grep to Next.js—keeping the interactivity of a SPA, but with the performance improvements from React Server Components.

    Avatar for ethanniserAvatar for kcorb
    Ethan and Kevin
  • Apr 7

    Protectd: Evolving Vercel’s always-on denial-of-service mitigations

    Securing web applications is core to the Vercel platform. It’s built into every request, every deployment, every layer of our infrastructure. Our always-on Denial-of-Service (DoS) mitigations have long run by default—silently blocking attacks before they ever reach your applications. Last year, we made those always-on mitigations visible with the release of the Vercel Firewall, which allows you to inspect traffic, apply custom rules, and understand how the platform defends your deployments. Now, we’re introducing Protectd, our next-generation real-time security engine. Running across all deployments, Protectd reduces mitigation times for novel DoS attacks by over tenfold, delivering faster, more adaptive protection against emerging threats. Let's take a closer look at how Protectd extends the Vercel Firewall by continuously mapping complex relationships between traffic attributes, analyzing, and learning from patterns to predict and block attacks.

    Avatar for ctgowrieAvatar for timer
    Casey and Joe

    Latest news.

  • Engineering
    Jun 9

    Building secure AI agents

    An AI agent is a language model with a system prompt and a set of tools. Tools extend the model's capabilities by adding access to APIs, file systems, and external services. But they also create new paths for things to go wrong. The most critical security risk is prompt injection. Similar to SQL injection, it allows attackers to slip commands into what looks like normal input. The difference is that with LLMs, there is no standard way to isolate or escape input. Anything the model sees, including user input, search results, or retrieved documents, can override the system prompt or event trigger tool calls. If you are building an agent, you must design for worst case scenarios. The model will see everything an attacker can control. And it might do exactly what they want.

    Avatar for cramforce
    Malte Ubl
  • Engineering
    Jun 4

    The no-nonsense approach to AI agent development

    AI agents are software systems that take over tasks made up of manual, multi-step processes. These often require context, judgment, and adaptation, making them difficult to automate with simple rule-based code. While traditional automation is possible, it usually means hardcoding endless edge cases. Agents offer a more flexible approach. They use context to decide what to do next, reducing manual effort on tedious steps while keeping a review process in place for important decisions. The most effective AI agents are narrow, tightly scoped, and domain-specific. Here's how to approach building one.

    Avatar for cramforce
    Malte Ubl
  • Engineering
    Jun 1

    Introducing the v0 composite model family

    We recently launched our AI models v0-1.5-md and v0-1.5-lg in v0.dev and v0-1.0-md via API. Today, we're sharing a deep dive into the composite model architecture behind those models. They combine specialized knowledge from retrieval-augmented generation (RAG), reasoning from state-of-the-art large language models (LLMs), and error fixing from a custom streaming post-processing model. While this may sound complex, it enables v0 to achieve significantly higher quality when generating code. Further, as base models improve, we can quickly upgrade to the latest frontier model while keeping the rest of the architecture stable.

    Avatar for aryamankAvatar for gaspar09Avatar for idopesok+2
    Aryaman, Gaspar, and 2 others
  • Engineering
    Apr 18

    Becoming an AI engineering company

    In today's rapidly evolving tech landscape, AI has moved from research labs to everyday tools with stunning speed. I wanted to share my perspective, not only as a CTO at Vercel, but as an engineer who's seen a few revolutions over the past 30 years.

    Avatar for cramforce
    Malte Ubl
  • Engineering
    Apr 14

    Migrating Grep from Create React App to Next.js

    Grep is extremely fast code search. You can search over a million repositories for specific code snippets, files, or paths. Search results need to appear instantly without loading spinners. Originally built with Create React App (CRA) as a fully client-rendered Single-Page App (SPA), Grep was fast—but with CRA now deprecated, we wanted to update the codebase to make it even faster and easier to maintain going forward. Here's how we migrated Grep to Next.js—keeping the interactivity of a SPA, but with the performance improvements from React Server Components.

    Avatar for ethanniserAvatar for kcorb
    Ethan and Kevin
  • Engineering
    Apr 9

    Introducing Chat SDK

    The AI SDK powers incredible applications across the web, and today we're announcing the Chat SDK—a best-in-class, production-ready template for building conversational AI applications like ChatGPT or Claude artifacts.

    Avatar for jaredAvatar for jrmy
    Jared and Jeremy
  • Engineering
    Apr 7

    Protectd: Evolving Vercel’s always-on denial-of-service mitigations

    Securing web applications is core to the Vercel platform. It’s built into every request, every deployment, every layer of our infrastructure. Our always-on Denial-of-Service (DoS) mitigations have long run by default—silently blocking attacks before they ever reach your applications. Last year, we made those always-on mitigations visible with the release of the Vercel Firewall, which allows you to inspect traffic, apply custom rules, and understand how the platform defends your deployments. Now, we’re introducing Protectd, our next-generation real-time security engine. Running across all deployments, Protectd reduces mitigation times for novel DoS attacks by over tenfold, delivering faster, more adaptive protection against emerging threats. Let's take a closer look at how Protectd extends the Vercel Firewall by continuously mapping complex relationships between traffic attributes, analyzing, and learning from patterns to predict and block attacks.

    Avatar for ctgowrieAvatar for timer
    Casey and Joe
  • Engineering
    Mar 25

    Postmortem on Next.js Middleware bypass

    Last week, we published CVE-2025-29927 and patched a critical severity vulnerability in Next.js. Here’s our post-incident analysis and next steps.

    Avatar for ty-sbano
    Ty Sbano
  • Engineering
    Mar 21

    AI SDK 4.2

    The AI SDK is an open-source toolkit for building AI applications with JavaScript and TypeScript. Its unified provider API allows you to use any language model and enables powerful UI integrations into leading web frameworks such as Next.js and Svelte.

    Avatar for lgrammelAvatar for jaredAvatar for nicoalbanese
    Lars, Jared, and Nico
  • Engineering
    Mar 7

    Personalization strategies that power ecommerce growth

    Personalization works best when it’s intentional. Rushing into it without the right approach can lead to higher costs, slower performance, and poor user experience. The key is to implement incrementally, with the right tools, while maintaining performance. When personalization is implemented effectively, it drives real business results, returning $20 for every $1 spent and driving 40% more revenue. Let's look at what personalization is, how to implement it correctly, and why Next.js and Vercel achieve optimal outcomes.

    Avatar for c-kirkland
    Collier Kirkland
  • Engineering
    Mar 3

    How Fluid compute works on Vercel

    Fluid compute is Vercel’s next-generation compute model designed to handle modern workloads with real-time scaling, cost efficiency, and minimal overhead. Traditional serverless architectures optimize for fast execution, but struggle with requests that spend significant time waiting on external models or APIs, leading to wasted compute. To address these inefficiencies, Fluid compute dynamically adjusts to traffic demands, reusing existing resources before provisioning new ones. At the center of Fluid is Vercel Functions router, which orchestrates function execution to minimize cold starts, maximize concurrency, and optimize resource usage. It dynamically routes invocations to pre-warmed or active instances, ensuring low-latency execution. By efficiently managing compute allocation, the router prevents unnecessary cold starts and scales capacity only when needed. Let's look at how it intelligently manages function execution.

    Avatar for mcocirioAvatar for c-kirkland
    Mariano and Collier
  • Engineering
    Jan 30

    ISR on Vercel is now faster and more cost-efficient

    When Next.js introduced Incremental Static Regeneration (ISR) in 2020, it changed how developers build for the web. ISR combines the speed of static generation with the flexibility of dynamic rendering, enabling sites to update content without requiring full rebuilds. Vercel has supported ISR from day one, making it easy for teams at The Washington Post, Algolia, and Sonos to serve fresh content while keeping page loads fast.

    Avatar for lubakravcheAvatar for malavikabalatdzAvatar for greetah
    Luba, Malavika, and Greta

Ready to deploy? Start building with a free account. Speak to an expert for your Pro or Enterprise needs.

Start Deploying
Talk to an Expert

Explore Vercel Enterprise with an interactive product tour, trial, or a personalized demo.

Explore Enterprise

Products

  • AI
  • Enterprise
  • Fluid Compute
  • Next.js
  • Observability
  • Previews
  • Rendering
  • Security
  • Turbo
  • v0

Resources

  • Community
  • Docs
  • Guides
  • Help
  • Integrations
  • Pricing
  • Resources
  • Solution Partners
  • Startups
  • Templates

Company

  • About
  • Blog
  • Careers
  • Changelog
  • Contact Us
  • Customers
  • Partners
  • Privacy Policy

Social

  • GitHub
  • LinkedIn
  • Twitter
  • YouTube

Loading status…

Select a display theme: