
Content collections


Added in: astro@2.0.0

Content collections are the best way to manage sets of content in any Astro project: blog posts, product descriptions, character profiles, recipes, or any structured content. Collections help to organize and query your documents, enable Intellisense and type checking in your editor, and provide automatic TypeScript type-safety for all of your content.

Astro provides performant, scalable APIs to load, query, and render content from anywhere: stored locally in your project, hosted remotely, or fetched live from frequently-updating sources.

A content collection is a set of related, structurally identical data. This data can be stored in one or several files locally (e.g. a folder of individual Markdown files of blog posts, a single JSON file of product descriptions) or fetched from remote sources such as a database, CMS, or API endpoint. Each member of the collection is called an entry.

  • Directory src/
    • Directory newsletter/ the “newsletter” collection
      • week-1.md a collection entry
      • week-2.md a collection entry
      • week-3.md a collection entry
    • Directory authors/ the “authors” collection
      • authors.json a single file containing all collection entries

Collections are defined by the location and shape of their entries and provide a convenient way to query and render your content and associated metadata. You can create a collection any time you have a group of related data or content, stored in the same location, that shares a common structure.

Two types of content collections are available to allow you to work with data fetched either at build time or at request time. Both build-time collections and live updating collections use:

  • A required loader to retrieve your content and metadata from wherever it is stored and make it available to your project through content-focused APIs.
  • An optional collection schema that allows you to define the expected shape of each entry for type safety, autocomplete, and validation in your editor.

Collections stored locally in your project or on your filesystem can use one of Astro’s provided build-time loaders to fetch data from Markdown, MDX, Markdoc, YAML, TOML, or JSON files. Point Astro to the location of your content, define your data shape, and you’re good to go with a blog or similarly content-heavy, mostly static site in no time!

With community-built loaders or by building a custom build-time collection loader or live loader yourself, you can fetch remote data from any external source, such as a CMS, database, or headless payment system, either at build time or live on demand.

Build-time content collections are updated at build time, and data is saved to a storage layer. This provides excellent performance for most content, but may not be suitable for frequently updating data sources requiring up-to-the-moment data freshness, such as live stock prices.

For the best performance and scalability, use build-time content collections when one or more of these is true:

  • Performance is critical and you want to prerender data at build time.
  • Your data is relatively static (e.g., blog posts, documentation, product descriptions).
  • You want to benefit from build-time optimization and caching.
  • You need to process MDX or perform image optimization.
  • Your data can be fetched once and reused across multiple builds.

Live content collections fetch their data at runtime rather than build time. This allows you to access frequently updated data from CMSs, APIs, databases, or other sources using a unified API, without needing to rebuild your site when the data changes. However, this can come at a performance cost since data is fetched at each request and returned directly with no data store persistence.

Live content collections are designed for data that changes frequently and needs to be up-to-date when a page is requested. Consider using them when one or more of these is true:

  • You need real-time information (e.g. user-specific data, current stock levels).
  • You want to avoid constant rebuilds for content that changes often.
  • Your data updates frequently (e.g. up-to-the-minute product inventory, prices, availability).
  • You need to pass dynamic filters to your data source based on user input or request parameters.
  • You’re building preview functionality for a CMS where editors need to see draft content immediately.

Both kinds of collections can exist in the same project, so you can always choose the best type of collection for each individual data source. For example, a build-time collection can manage product descriptions, while a live collection can manage product inventory.

Both types of collections use similar APIs (e.g. getCollection() and getLiveCollection()), so that working with collections will feel familiar no matter which one you choose, while still ensuring that you always know which type of collection you are working with.

We suggest using build-time content collections whenever possible, and using live collections when your content needs updating in real time and the performance tradeoffs are acceptable. Additionally, live content collections have some limitations compared to build-time collections:

  • No MDX support: MDX cannot be rendered at runtime
  • No image optimization: Images cannot be processed at runtime
  • Performance considerations: Data is fetched on each request (unless cached)
  • No data store persistence: Data is not saved to the content layer data store

Define your data as a collection when:

  • You have multiple files or data to organize that share the same overall structure (e.g. a directory of blog posts written in Markdown which all have the same frontmatter properties).
  • You have existing content stored remotely, such as in a CMS, and want to take advantage of the collections helper functions instead of using fetch() or SDKs.
  • You need to fetch (tens of) thousands of related pieces of data at build time, and need a querying and caching method that performs at scale.

Much of the benefit of using collections comes from:

  • Defining a common data shape to validate that an individual entry is “correct” or “complete”, avoiding errors in production.
  • Content-focused APIs designed to make querying intuitive (e.g. getCollection() instead of import.meta.glob()) when importing and rendering content on your pages.
  • Access to both built-in loaders and the low-level Content Loader API for retrieving your content. Several third-party and community-built loaders are also available, and you can build your own custom loader to fetch data from anywhere.
  • Performance and scalability. Build-time content collections data can be cached between builds and is suitable for tens of thousands of content entries.

Collections provide excellent structure, safety, and organization when you have multiple pieces of content that must share the same properties.

Collections may not be your solution if:

  • You have only one or a small number of different content pages. Consider making individual page components such as src/pages/about.astro with your content directly instead.
  • You are displaying files that are not processed by Astro, such as PDFs. Place these static assets in the public/ directory of your project instead.
  • Your data source has its own SDK/client library for imports that is incompatible with or does not offer a content loader, and you prefer to use it directly.

Content collections rely on TypeScript to provide Zod validation, Intellisense, and type checking in your editor. By default, Astro configures a strict TypeScript template when you create a new project using the create astro CLI command. Both of Astro’s strict and strictest templates include the TypeScript settings your project needs for content collections.

If you changed this setting to base because you are not writing TypeScript in your project, or are not using any of Astro’s built-in templates, you will need to also add the following compilerOptions in your tsconfig.json to use content collections:

tsconfig.json
{
  "extends": "astro/tsconfigs/base",
  // not needed for `strict` or `strictest`
  "compilerOptions": {
    "strictNullChecks": true,
    "allowJs": true
  }
}

All of your build-time content collections are defined in a special src/content.config.ts file (.js and .mjs extensions are also supported) using defineCollection(), and then a single collections object is exported for use in your project.

Each individual collection configures a required loader and, optionally, a schema:

src/content.config.ts
// 1. Import utilities from `astro:content`
import { defineCollection } from 'astro:content';
// 2. Import loader(s)
import { glob, file } from 'astro/loaders';
// 3. Import Zod
import { z } from 'astro/zod';
// 4. Define a `loader` and `schema` for each collection
const blog = defineCollection({
  loader: glob({ base: './src/content/blog', pattern: '**/*.{md,mdx}' }),
  schema: z.object({
    title: z.string(),
    description: z.string(),
    pubDate: z.coerce.date(),
    updatedDate: z.coerce.date().optional(),
  }),
});
// 5. Export a single `collections` object to register your collection(s)
export const collections = { blog };

You can then use the dedicated getCollection() and getEntry() functions to query your content collections data and render your content.

You can choose to generate page routes from your build-time collection entries at build time for an entirely static, prerendered site. Or, you can render your build-time collections on demand, choosing to delay building your page until it is first requested. This is useful when you have a large number of pages (e.g. thousands or tens of thousands) and want to delay building a static page until it is needed.

Astro provides two built-in loaders (glob() and file()) for fetching your local content at build time. Pass the location of your data in your project or on your filesystem, and these loaders will automatically handle your data and update the content layer’s persistent data store.

To fetch remote data at build time, you can build a custom loader to retrieve your data and update the data store. Or, you can use any third-party or community-published loader integration. Several already exist for popular content management systems as well as common data sources such as Obsidian vaults, GitHub repositories, or Bluesky posts.

The glob() loader fetches entries from directories of Markdown, MDX, Markdoc, JSON, YAML, or TOML files from anywhere on the filesystem. If you store your content entries locally as separate files, such as a directory of blog posts, then the glob() loader is all you need to access your content.

This loader requires a pattern of entry files to match using glob patterns supported by micromatch, and a base file path of where your files are located. A unique id for each entry will be automatically generated from its file name, but you can define custom IDs if needed.

src/content.config.ts
import { defineCollection } from 'astro:content';
import { glob } from 'astro/loaders';
const blog = defineCollection({
  loader: glob({ pattern: "**/*.md", base: "./src/data/blog" }),
});
export const collections = { blog };

When using the glob() loader with Markdown, MDX, Markdoc, JSON, or TOML files, every content entry id is automatically generated in a URL-friendly format based on the content filename. This unique id is used to query the entry directly from your collection. It is also useful when creating new pages and URLs from your content.

You can override a single entry’s generated id by adding your own slug property to the file frontmatter or data object for JSON files. This is similar to the “permalink” feature of other web frameworks.

src/blog/1.md
---
title: My Blog Post
slug: my-custom-id/supports/slashes
---
Your blog post content here.
src/categories/1.json
{
  "title": "My Category",
  "slug": "my-custom-id/supports/slashes",
  "description": "Your category description here."
}

You can also pass a generateId() callback in the glob() loader’s options when you define your build-time collection to adjust how ids are generated. For example, you may wish to override the default behavior of converting uppercase letters to lowercase for each collection entry:

src/content.config.ts
const authors = defineCollection({
  /* Retrieve all JSON files in your authors directory while retaining
   * uppercase letters in the ID. */
  loader: glob({
    pattern: '**/*.json',
    base: "./src/data/authors",
    generateId: ({ entry }) => entry.replace(/\.json$/, ''),
  }),
});

The file() loader fetches multiple entries from a single local file defined in your collection. The file() loader will automatically detect and parse (based on the file extension) a single array of objects from JSON and YAML files, and will treat each top-level table as an independent entry in TOML files.

src/content.config.ts
import { defineCollection } from 'astro:content';
import { file } from 'astro/loaders';
const dogs = defineCollection({
  loader: file("src/data/dogs.json"),
});
export const collections = { dogs };

Each entry object in the file must have a unique id property so that the entry can be identified and queried. Unlike the glob() loader, the file() loader will not automatically generate IDs for each entry.

You can provide your entries as an array of objects with an id property, or in object form where the unique id is the key:

src/data/dogs.json
// Specify an `id` property in each object of an array
[
  { "id": "poodle", "coat": "curly", "shedding": "low" },
  { "id": "afghan", "coat": "short", "shedding": "low" }
]
src/data/dogs.json
// Each key will be used as the `id`
{
  "poodle": { "coat": "curly", "shedding": "low" },
  "afghan": { "coat": "silky", "shedding": "low" }
}

Support for parsing single JSON, YAML, and TOML files into collection entries with the file() loader is built in (unless you have a nested JSON document). To load your collection from unsupported file types, such as .csv, you will need to create a parser function.

The following example shows importing a third-party CSV parser then passing a custom parser function to the file() loader:

src/content.config.ts
import { defineCollection } from "astro:content";
import { file } from "astro/loaders";
import { parse as parseCsv } from "csv-parse/sync";
const cats = defineCollection({
  loader: file("src/data/cats.csv", {
    parser: (text) => parseCsv(text, { columns: true, skipEmptyLines: true }),
  }),
});

The parser() argument can be used to load a single collection from a nested JSON document. For example, this JSON file contains multiple collections:

src/data/pets.json
{"dogs": [{}], "cats": [{}]}

You can separate these collections by passing a custom parser() function to the file() loader for each collection, using Astro’s built-in JSON parsing:

src/content.config.ts
import { file } from "astro/loaders";
import { defineCollection } from "astro:content";
const dogs = defineCollection({
  loader: file("src/data/pets.json", { parser: (text) => JSON.parse(text).dogs })
});
const cats = defineCollection({
  loader: file("src/data/pets.json", { parser: (text) => JSON.parse(text).cats })
});

You can build a custom loader using the Content Loader API to fetch remote content from any data source, such as a CMS, a database, or an API endpoint.

Then you can import and define your custom loader in your collections config, passing any required values:

src/content.config.ts
import { defineCollection } from 'astro:content';
import { myLoader } from './loader.ts';
const blog = defineCollection({
  loader: myLoader({
    url: "https://api.example.com/posts",
    apiKey: "my-secret",
  }),
});

Using a custom loader to fetch your data will automatically create a collection from your remote data. This gives you all the benefits of local collections, including collection-specific API helpers such as getCollection() and render() to query and display your data, as well as schema validation.

Similar to creating an Astro integration or Vite plugin, you can distribute your loader as an npm package that others can use in their projects.

See the full Content Loader API for examples of how to build your own loader.
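
To illustrate the general shape, a custom build-time loader is an object with a name and a load() method that writes entries into the data store. The following is only a rough, hypothetical sketch of what the myLoader imported above might look like; the response shape and field names are assumptions, so consult the Content Loader API reference for the actual contract:

src/loader.ts
import type { Loader } from 'astro/loaders';

// Hypothetical sketch of a custom object loader: fetches remote posts at build time
export function myLoader(options: { url: string; apiKey: string }): Loader {
  return {
    name: 'my-loader',
    async load({ store, parseData, generateDigest, logger }) {
      logger.info('Fetching posts');
      const response = await fetch(options.url, {
        headers: { Authorization: `Bearer ${options.apiKey}` },
      });
      // Assumes the API returns an array of post objects with an `id` field
      const posts = await response.json();
      store.clear();
      for (const post of posts) {
        // Validate the raw data against the collection schema, then save it to the data store
        const data = await parseData({ id: post.id, data: post });
        store.set({ id: post.id, data, digest: generateDigest(data) });
      }
    },
  };
}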

Schemas enforce consistent frontmatter or entry data within a collection through Zod validation. A schema guarantees that this data exists in a predictable form when you need to reference or query it. If any file violates its collection schema, Astro will provide a helpful error to let you know.

Schemas also power Astro’s automatic TypeScript typings for your content. When you define a schema for your collection, Astro will automatically generate and apply a TypeScript interface to it. The result is full TypeScript support when you query your collection, including property autocompletion and type-checking.

Providing a schema is optional, but highly recommended! If you choose to use a schema, then every frontmatter or data property of your collection entries must be defined using a Zod data type:

src/content.config.ts
import { defineCollection } from 'astro:content';
import { z } from 'astro/zod';
import { glob, file } from 'astro/loaders';
const blog = defineCollection({
  loader: glob({ pattern: "**/*.md", base: "./src/data/blog" }),
  schema: z.object({
    title: z.string(),
    description: z.string(),
    pubDate: z.coerce.date(),
    updatedDate: z.coerce.date().optional(),
  })
});
const dogs = defineCollection({
  loader: file("src/data/dogs.json"),
  schema: z.object({
    id: z.string(),
    breed: z.string(),
    temperament: z.array(z.string()),
  }),
});
export const collections = { blog, dogs };

Astro uses Zod to power its content schemas. With Zod, Astro is able to validate every file’s data within a collection and provide automatic TypeScript types when you query content from inside your project.

To use Zod in Astro, import the z utility from "astro/zod". This is a re-export of the Zod library, and it supports all of the features of Zod 4.

See the z utility reference for a cheatsheet of common datatypes and to learn how Zod works and what features are available.

All Zod schema methods (e.g. .parse(), .transform()) are available, with some limitations. Notably, performing custom validation checks on images using image().refine() is unsupported.
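
As a quick illustration of the kinds of Zod datatypes commonly used in collection schemas (a sketch only; the property names here are arbitrary examples):

src/content.config.ts
import { z } from 'astro/zod';

// Example schema showing common Zod datatypes (property names are illustrative)
const exampleSchema = z.object({
  isDraft: z.boolean(),
  title: z.string(),
  sortOrder: z.number(),
  image: z.object({
    src: z.string(),
    alt: z.string(),
  }),
  author: z.string().default('Anonymous'), // fall back to a default value when missing
  language: z.enum(['en', 'es']), // restrict to a fixed set of values
  tags: z.array(z.string()),
  footnote: z.string().optional(),
  publishDate: z.coerce.date(), // coerce a string into a Date object
});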

Collection entries can also “reference” other related entries.

With the reference() function, you can define a property in a collection schema as an entry from another collection. For example, you can require that every space-shuttle entry includes a pilot property which uses the pilot collection’s own schema for type checking, autocomplete, and validation.

A common example is a blog post that references reusable author profiles stored as JSON, or related post URLs stored in the same collection:

src/content.config.ts
import { defineCollection, reference } from 'astro:content';
import { glob } from 'astro/loaders';
import { z } from 'astro/zod';
const blog = defineCollection({
  loader: glob({ base: './src/content/blog', pattern: '**/*.{md,mdx}' }),
  schema: z.object({
    title: z.string(),
    // Reference a single author from the `authors` collection by `id`
    author: reference('authors'),
    // Reference an array of related posts from the `blog` collection by `id`
    relatedPosts: z.array(reference('blog')),
  })
});
const authors = defineCollection({
  loader: glob({ pattern: '**/*.json', base: "./src/data/authors" }),
  schema: z.object({
    name: z.string(),
    portfolio: z.url(),
  })
});
export const collections = { blog, authors };

This example blog post specifies the ids of related posts and the id of the post author:

src/data/blog/welcome.md
---
title: "Welcome to my blog"
author: ben-holmes # references `src/data/authors/ben-holmes.json`
relatedPosts:
  - about-me # references `src/data/blog/about-me.md`
  - my-year-in-review # references `src/data/blog/my-year-in-review.md`
---

These references will be transformed into objects containing a collection key and an id key, allowing you to easily query them in your templates.

Astro provides helper functions to query a build-time collection and return one or more content entries.

  • getCollection() fetches an entire collection and returns an array of entries.
  • getEntry() fetches a single entry from a collection.

These functions return entries with a unique id and a data object containing all defined properties. They will also return a body containing the raw, uncompiled body of a Markdown, MDX, or Markdoc document.

src/pages/index.astro
---
import { getCollection, getEntry } from 'astro:content';
// Get all entries from a collection.
// Requires the name of the collection as an argument.
const allBlogPosts = await getCollection('blog');
// Get a single entry from a collection.
// Requires the name of the collection and `id`
const poodleData = await getEntry('dogs', 'poodle');
---

The sort order of generated collections is non-deterministic and platform-dependent. This means that if you are calling getCollection() and need your entries returned in a specific order (e.g. blog posts sorted by date), you must sort the collection entries yourself:

src/pages/blog.astro
---
import { getCollection } from 'astro:content';
const posts = (await getCollection('blog')).sort(
  (a, b) => b.data.pubDate.valueOf() - a.data.pubDate.valueOf(),
);
---
See the full list of properties returned by the CollectionEntry type.

After querying your collections, you can access each entry’s content and metadata directly inside of your Astro component template.

For example, you can create a list of links to your blog posts, displaying information from your entry’s frontmatter using the data property:

src/pages/index.astro
---
import { getCollection } from 'astro:content';
const posts = await getCollection('blog');
---
<h1>My posts</h1>
<ul>
  {posts.map(post => (
    <li><a href={`/blog/${post.id}`}>{post.data.title}</a></li>
  ))}
</ul>

Once queried, you can render Markdown and MDX entries to HTML using the render() function from astro:content. Calling this function gives you access to rendered HTML content, including both a <Content /> component and a list of all rendered headings.

src/pages/blog/post-1.astro
---
import { getEntry, render } from 'astro:content';
const entry = await getEntry('blog', 'post-1');
const { Content } = await render(entry);
---
<h1>{entry.data.title}</h1>
<p>Published on: {entry.data.pubDate.toDateString()}</p>
<Content />
When working with MDX entries, you can also pass your own components to <Content /> to replace HTML elements with custom alternatives.
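
For example, you could swap every rendered <h1> for your own component (a sketch; the Heading component and its path are assumptions):

src/pages/blog/post-1.astro
---
import { getEntry, render } from 'astro:content';
import Heading from '../../components/Heading.astro';
const entry = await getEntry('blog', 'post-1');
const { Content } = await render(entry);
---
<!-- Replace rendered <h1> elements with the custom Heading component -->
<Content components={{ h1: Heading }} />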

You can also pass an entire collection entry to a component as a prop.

You can use the CollectionEntry utility to correctly type your component’s props using TypeScript. This utility takes a string argument that matches the name of your collection schema and will inherit all of the properties of that collection’s schema.

src/components/BlogCard.astro
---
import type { CollectionEntry } from 'astro:content';
interface Props {
  post: CollectionEntry<'blog'>;
}
// `post` will match your 'blog' collection schema type
const { post } = Astro.props;
---
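
For example, a page that queries the collection could then pass each entry to this component (a sketch, assuming the component above lives at src/components/BlogCard.astro):

src/pages/index.astro
---
// Query the collection and hand each typed entry to the component above
import { getCollection } from 'astro:content';
import BlogCard from '../components/BlogCard.astro';
const posts = await getCollection('blog');
---
<ul>
  {posts.map((post) => (
    <li><BlogCard post={post} /></li>
  ))}
</ul>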

getCollection() takes an optional “filter” callback that allows you to filter your query based on an entry’s id or data properties.

You can use this to filter by any content criteria you like. For example, you can filter by properties like draft to prevent any draft blog posts from being published to your blog:

src/pages/blog.astro
---
// Example: Filter out content entries with `draft: true`
import { getCollection } from 'astro:content';
const publishedBlogEntries = await getCollection('blog', ({ data }) => {
  return data.draft !== true;
});
---

You can also create draft pages that are available when running the dev server, but not built in production:

src/pages/blog.astro
---
// Example: Filter out content entries with `draft: true` only when building for production
import { getCollection } from 'astro:content';
const blogEntries = await getCollection('blog', ({ data }) => {
  return import.meta.env.PROD ? data.draft !== true : true;
});
---

The filter argument also supports filtering by nested directories within a collection. Since the id includes the full nested path, you can filter by the start of each id to only return items from a specific nested directory:

src/pages/blog.astro
---
// Example: Filter entries by sub-directory in the collection
import { getCollection } from 'astro:content';
const englishDocsEntries = await getCollection('docs', ({ id }) => {
  return id.startsWith('en/');
});
---

To access references defined in your schema, first query your collection entry. Your references will be available on the returned data object (e.g. entry.data.author and entry.data.relatedPosts).

Then, you can use the getEntry() function again (or getEntries() to retrieve multiple referenced entries) by passing those returned values. The reference() function in your schema transforms those values into one or more collection and id objects as a convenient way to query this related data.

src/pages/blog/adventures-in-space.astro
---
import { getEntry, getEntries } from 'astro:content';
// First, query a blog post
const blogPost = await getEntry('blog', 'adventures-in-space');
// Retrieve a single reference item: the blog post's author
// Equivalent to querying `{collection: "authors", id: "ben-holmes"}`
const author = await getEntry(blogPost.data.author);
// Retrieve an array of referenced items: all the related posts
// Equivalent to querying `[{collection: "blog", id: "visiting-mars"}, {collection: "blog", id: "leaving-earth-for-the-first-time"}]`
const relatedPosts = await getEntries(blogPost.data.relatedPosts);
---
<h1>{blogPost.data.title}</h1>
<p>Author: {author.data.name}</p>
<!-- ... -->
<h2>You might also like:</h2>
{relatedPosts.map(post => (
  <a href={`/blog/${post.id}`}>{post.data.title}</a>
))}

Content collections are stored outside of the src/pages/ directory. This means that no pages or routes are generated for your collection items by default by Astro’s file-based routing.

You will need to manually create a new dynamic route if you want to generate HTML pages for each of your collection entries, such as individual blog posts. Your dynamic route will map the incoming request param (e.g. Astro.params.id in src/pages/blog/[...id].astro) to fetch the correct entry for each page.

The exact method for generating routes will depend on whether your pages are prerendered (default) or rendered on demand by a server.

If you are building a static website (Astro’s default behavior) with build-time collections, use the getStaticPaths() function to create multiple pages from a single page component (e.g. src/pages/[id].astro) during your build.

Call getCollection() inside of getStaticPaths() to have your collection data available for building static routes. Then, create the individual URL paths using the id property of each content entry. Each page receives the entire collection entry as a prop for use in your page template.

src/pages/posts/[id].astro
---
import { getCollection, render } from 'astro:content';
// 1. Generate a new path for every collection entry
export async function getStaticPaths() {
  const posts = await getCollection('blog');
  return posts.map(post => ({
    params: { id: post.id },
    props: { post },
  }));
}
// 2. For your template, you can get the entry directly from the prop
const { post } = Astro.props;
const { Content } = await render(post);
---
<h1>{post.data.title}</h1>
<Content />

This will generate a page route for every entry in the blog collection. For example, an entry at src/blog/hello-world.md will have an id of hello-world, and therefore its final URL will be /posts/hello-world/.

With an adapter installed for on-demand rendering, you can generate your dynamic page routes at request time. First, examine the request (using Astro.request or Astro.params) to find the slug on demand, and then fetch it using one of Astro’s content collection helper functions:

  • getEntry() for build-time collection pages that are generated once, upon first request.
  • getLiveEntry() for live collection pages where data is (re)fetched at each request time.
src/pages/posts/[id].astro
---
export const prerender = false; // Not needed in 'server' mode
import { getEntry, render } from "astro:content";
// 1. Get the slug from the incoming server request
const { id } = Astro.params;
if (id === undefined) {
  return Astro.redirect("/404");
}
// 2. Query for the entry directly using the request slug
const post = await getEntry("blog", id);
// 3. Redirect if the entry does not exist
if (post === undefined) {
  return Astro.redirect("/404");
}
// 4. Render the entry to HTML in the template
const { Content } = await render(post);
---
<h1>{post.data.title}</h1>
<Content />

Live collections use a different API than build-time content collections, although the configuration and helper functions are designed to feel familiar.

Key differences include:

  1. Execution time: Run at request time instead of build time
  2. Configuration file: Use src/live.config.ts instead of src/content.config.ts
  3. Collection definition: Use defineLiveCollection() instead of defineCollection()
  4. Loader API: Implement loadCollection and loadEntry methods instead of the load method
  5. Data return: Return data directly instead of storing in the data store
  6. User-facing functions: Use getLiveCollection()/getLiveEntry() instead of getCollection()/getEntry()

Additionally, you must have an adapter configured for on-demand rendering of live collection data.

Define your live collections in the special file src/live.config.ts (separate from your src/content.config.ts for build-time collections, if you have one).

Each individual collection configures a loader and, optionally, a schema.

Unlike for build-time collections, there are no built-in live loaders available. You will need to create a custom live loader for your specific data source or find a third-party loader to pass to your live collection’s loader property.

Live loaders can optionally provide their own type safety, so defining a Zod schema for live collections is optional. However, if you provide one, it will take precedence over the live loader’s types.

src/live.config.ts
// Define live collections for accessing real-time data
import { defineLiveCollection } from 'astro:content';
import { storeLoader } from '@mystore/astro-loader';
const products = defineLiveCollection({
  loader: storeLoader({
    apiKey: process.env.STORE_API_KEY,
    endpoint: 'https://api.mystore.com/v1',
  }),
});
// Export a single `collections` object to register your collection(s)
export const collections = { products };

You can then use the dedicated getLiveCollection() and getLiveEntry() functions to access your live data and render your content.

You can generate page routes from your live collection entries on demand, fetching your data fresh at runtime upon each request without needing a rebuild of your site like build-time collections do. This is useful when accessing live, up-to-the-moment data is more important than having your content available in a performant data storage layer that persists between site builds.

You can build a custom live loader using the Live Loader API to fetch remote content fresh upon request from any data source, such as a CMS, a database or an API endpoint. You will have to tell your live loader how to fetch and return content entries from your desired data source, as well as provide error handling for unsuccessful data requests.

Using a live loader to fetch your data will automatically create a collection from your remote data. This gives you all the benefits of Astro’s content collections, including collection-specific API helpers such as getLiveCollection() and render() to query and display your data, as well as helpful error handling.

See the basics of building a live loader using the Live Loader API
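
As a rough, hypothetical sketch of the shape a live loader can take, the following implements loadCollection() and loadEntry() for the apiLoader used in the next example. The endpoint paths, response fields, and typing details are assumptions; see the Live Loader API reference for the actual contract and error handling options:

src/loaders/api-loader.ts
import type { LiveLoader } from 'astro/loaders';

// Hypothetical sketch: a live loader that fetches product data on every request
export function apiLoader(config: { endpoint: string }): LiveLoader {
  return {
    name: 'api-loader',
    // Called by getLiveCollection(); a caller-provided filter object could be forwarded
    // as query parameters (ignored in this sketch)
    loadCollection: async ({ filter }) => {
      try {
        const response = await fetch(`${config.endpoint}/products`);
        const products = await response.json();
        return {
          entries: products.map((product) => ({ id: product.id, data: product })),
        };
      } catch (error) {
        return { error: new Error(`Failed to load products: ${error}`) };
      }
    },
    // Called by getLiveEntry(); `filter` contains the requested id (or a custom filter object)
    loadEntry: async ({ filter }) => {
      try {
        const response = await fetch(`${config.endpoint}/products/${filter.id}`);
        if (!response.ok) {
          return { error: new Error('Product not found') };
        }
        const product = await response.json();
        return { entry: { id: product.id, data: product } };
      } catch (error) {
        return { error: new Error(`Failed to load product: ${error}`) };
      }
    },
  };
}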

You can use Zod schemas with live collections to validate and transform data at runtime. This Zod validation works the same way as schemas for build-time collections.

When you define a schema for a live collection, it takes precedence over the live loader’s types when you query the collection:

src/live.config.ts
import { z, defineLiveCollection } from 'astro:content';
import { apiLoader } from './loaders/api-loader';
const products = defineLiveCollection({
  loader: apiLoader({ endpoint: process.env.API_URL }),
  schema: z
    .object({
      id: z.string(),
      name: z.string(),
      price: z.number(),
      // Transform the API's category format
      category: z.string().transform((str) => str.toLowerCase().replace(/\s+/g, '-')),
      // Coerce the date to a Date object
      createdAt: z.coerce.date(),
    })
    .transform((data) => ({
      ...data,
      // Add a formatted price field
      displayPrice: `$${data.price.toFixed(2)}`,
    })),
});
export const collections = { products };

When using Zod schemas with live collections, validation errors are automatically caught and returned as AstroError objects:

src/pages/store/index.astro
---
export const prerender = false; // Not needed in 'server' mode
import { LiveCollectionValidationError } from 'astro/content/runtime';
import { getLiveEntry } from 'astro:content';
const { entry, error } = await getLiveEntry('products', '123');
// You can handle validation errors specifically
if (LiveCollectionValidationError.is(error)) {
  console.error(error.message);
  return Astro.rewrite('/500');
}
// TypeScript knows entry.data matches your Zod schema, not the loader's type
console.log(entry?.data.displayPrice); // e.g., "$29.99"
---
See Zod’s README for complete documentation on how Zod works and what features are available.

Astro provides live collection helper functions to access live data on each request and return one (or more) content entries. These can be used similarly to their build-time collection counterparts.

These return entries with a unique id and a data object with all defined properties from the live loader. When using third-party or community loaders distributed as npm packages, check their own documentation for the expected shape of data returned.

You can use these functions to access your live data, passing the name of the collection and optionally filtering conditions.

src/pages/store/[slug].astro
---
export const prerender = false; // Not needed in 'server' mode
import { getLiveCollection, getLiveEntry } from 'astro:content';
// Use loader-specific filters
const { entries: draftArticles } = await getLiveCollection('articles', {
  status: 'draft',
  author: 'john-doe',
});
// Get a specific product by ID
const { entry: product } = await getLiveEntry('products', Astro.params.slug);
---

If your live loader returns a rendered property, you can use the render() function and <Content /> component to render your content directly in your pages, using the same method as build-time collections.

You also have access to any error returned by the live loader, for example, to rewrite to a 404 page when content cannot be displayed:

src/pages/store/[id].astro
---
export const prerender = false; // Not needed in 'server' mode
import { getLiveEntry, render } from 'astro:content';
const { entry, error } = await getLiveEntry('articles', Astro.params.id);
if (error) {
  return Astro.rewrite('/404');
}
const { Content } = await render(entry);
---
<h1>{entry.data.title}</h1>
<Content />

Live loaders can fail due to network issues, API errors, or validation problems. The API is designed to make error handling explicit.

When you call getLiveCollection() or getLiveEntry(), the error will be one of:

  • The error type defined by the loader (if it returned an error)
  • A LiveEntryNotFoundError if the entry was not found
  • A LiveCollectionValidationError if the collection data does not match the expected schema
  • A LiveCollectionCacheHintError if the cache hint is invalid
  • A LiveCollectionError for other errors, such as uncaught errors thrown in the loader

You can use instanceof to check the type of an error at runtime:

src/pages/store/[id].astro
---
export const prerender = false; // Not needed in 'server' mode
import { LiveEntryNotFoundError } from 'astro/content/runtime';
import { getLiveEntry } from 'astro:content';
const { entry, error } = await getLiveEntry('products', Astro.params.id);
if (error) {
  if (error instanceof LiveEntryNotFoundError) {
    console.error(`Product not found: ${error.message}`);
    Astro.response.status = 404;
  } else {
    console.error(`Error loading product: ${error.message}`);
    return Astro.redirect('/500');
  }
}
---

Added in: astro@4.13.0

Astro auto-generates JSON Schema files for collections, which you can use in your editor to get IntelliSense and type-checking for data files.

A JSON Schema file is generated for each collection in your project and output to the .astro/collections/ directory. For example, if you have two collections, one named authors and another named posts, Astro will generate .astro/collections/authors.schema.json and .astro/collections/posts.schema.json.

Use JSON Schemas in JSON files

You can manually point to an Astro-generated schema by setting the $schema field in your JSON file. The value should be a relative file path from the data file to the schema. In the following example, a data file in src/data/authors/ uses the schema generated for the authors collection:

src/data/authors/armand.json
{
  "$schema": "../../../.astro/collections/authors.schema.json",
  "name": "Armand",
  "skills": ["Astro", "Starlight"]
}

Use a schema for a group of JSON files in VS Code

In VS Code, you can configure a schema to apply to all files in a collection using the json.schemas setting. In the following example, all files in the src/data/authors/ directory will use the schema generated for the authors collection:

{
  "json.schemas": [
    {
      "fileMatch": ["/src/data/authors/**"],
      "url": "./.astro/collections/authors.schema.json"
    }
  ]
}

Use schemas in YAML files in VS Code

In VS Code, you can add support for using JSON schemas in YAML files using the Red Hat YAML extension. With this extension installed, you can reference a schema in a YAML file using a special comment syntax:

src/data/authors/armand.yml
# yaml-language-server: $schema=../../../.astro/collections/authors.schema.json
name: Armand
skills:
- Astro
- Starlight

Use schemas for a group of YAML files in VS Code

With the Red Hat YAML extension, you can configure a schema to apply to all YAML files in a collection using the yaml.schemas setting. In the following example, all YAML files in the src/data/authors/ directory will use the schema generated for the authors collection:

{
  "yaml.schemas": {
    "./.astro/collections/authors.schema.json": ["/src/data/authors/*.yml"]
  }
}

See “Associating schemas” in the Red Hat YAML extension documentation for more details.
