
Conversation


Copilot AI commented Jan 14, 2026

What this PR does / why we need it:

Adds a blog post documenting the 10x latency improvements from Native Python Mode transformations and write-time transformations for On Demand Feature Views.

Changes

  • Blog post: Created infra/website/docs/blog/feature-transformation-latency.md covering:

    • Native Python Mode for singleton/list-based transformations (10x faster than Pandas)
    • Write-time transformations via write_to_online_store parameter for reduced online latency
    • Code examples and future roadmap for unified transformation API
  • Latency comparison image: Added infra/website/public/images/blog/latency.png (174KB) showing Pandas vs Native Python latency decomposition

  • Build hygiene: Added infra/website/dist/ to .gitignore to exclude build artifacts

Build Verification

✅ Build completed successfully with npm run build

  • All 571 modules transformed
  • Blog post generates at /blog/feature-transformation-latency/index.html
  • Image correctly references /images/blog/latency.png

Blog post rendering

Which issue(s) this PR fixes:

Addresses the issue requesting that the transformation latency blog post be added to feast.dev.

Misc

Co-authored-by: Francisco Javier Arceo and Shuchu Han (content attribution)

Original prompt

This section details the original issue you should resolve

<issue_title>Add Feature Transformation Latency Blog Post to Website</issue_title>
<issue_description>Is your feature request related to a problem? Please describe.
Add this blog post to feast.dev site:

Faster On Demand Transformations in Feast 🏎️💨

Thank you to Shuchu Han and Francisco Javier Arceo for their contributions evaluating the latency of transformation for On Demand Feature Views and adding transformations on writes to On Demand Feature Views. Thank you also to Maks Stachowiak, Ross Briden, Ankit Nadig, and the folks at Affirm for inspiring this work and creating an initial proof of concept.

Feature engineering is at the core of building high-performance machine learning models. The Feast team has introduced two major enhancements to On Demand Feature Views (ODFVs), pushing the boundaries of efficiency and flexibility for data scientists and engineers. Here's a closer look at these exciting updates:

1. Turbocharging Transformations with Native Python Mode

Traditionally, transformations in ODFVs were limited to Pandas-based operations. While powerful, Pandas transformations can be computationally expensive for certain use cases. Feast now introduces Native Python Mode, a feature that allows users to write transformations using pure Python.

Key benefits of Native Python Mode include:

  • Blazing Speed: Transformations using Native Python are nearly 10x faster than Pandas for many operations.
  • Intuitive Design: This mode supports list-based and singleton (row-level) transformations, making it easier for data scientists to think in terms of individual rows rather than entire datasets.
  • Versatility: Users can now switch between batch and singleton transformations effortlessly, catering to both historical and online retrieval scenarios.

![Latency decomposition comparison: Pandas vs Native Python](/images/blog/latency.png)

Here’s an example of using Native Python Mode for singleton transformations:

```python
@on_demand_feature_view(
    sources=[driver_hourly_stats_view, input_request],
    schema=[Field(name="conv_rate_plus_acc_singleton", dtype=Float64)],
    mode="python",
    singleton=True,
)
def transformed_conv_rate_singleton(inputs: Dict[str, Any]) -> Dict[str, Any]:
    return {"conv_rate_plus_acc_singleton": inputs["conv_rate"] + inputs["acc_rate"]}
```

This approach aligns with how many data scientists naturally process data, simplifying the implementation of feature engineering workflows.
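Stripped of the Feast decorator, the two Native Python shapes described above can be sketched in plain Python. This is a hypothetical, framework-free illustration (the function names are made up for this sketch and are not part of Feast):

```python
from typing import Dict, List

# Hypothetical sketch, independent of Feast: the same feature logic expressed
# in both of Feast's Native Python shapes.

def conv_rate_plus_acc_list(inputs: Dict[str, List[float]]) -> Dict[str, List[float]]:
    # List-based (batch) mode: each key maps to a column of values,
    # one entry per row being transformed.
    return {
        "conv_rate_plus_acc": [
            c + a for c, a in zip(inputs["conv_rate"], inputs["acc_rate"])
        ]
    }

def conv_rate_plus_acc_singleton(inputs: Dict[str, float]) -> Dict[str, float]:
    # Singleton (row-level) mode: each key maps to a single row's value,
    # matching how many data scientists reason about one record at a time.
    return {"conv_rate_plus_acc": inputs["conv_rate"] + inputs["acc_rate"]}

print(conv_rate_plus_acc_list({"conv_rate": [0.5, 0.25], "acc_rate": [0.5, 0.25]}))
print(conv_rate_plus_acc_singleton({"conv_rate": 0.5, "acc_rate": 0.5}))
```

The only difference between the two is whether the input dictionary holds columns or a single row; the feature logic itself is identical, which is what makes switching between batch and singleton transformations straightforward.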

2. Transformations on Writes: A New Dimension of Latency Optimization

Until now, ODFVs operated solely as transformations on reads, applying logic during online feature retrieval. While this ensured flexibility, it sometimes came at the cost of increased latency during retrieval. Feast now supports transformations on writes, enabling users to apply transformations during data ingestion and store the transformed features in the online store.

Why does this matter?

  • Reduced Online Latency: With transformations pre-applied at ingestion, online retrieval becomes a straightforward lookup, significantly improving performance for latency-sensitive applications.
  • Operational Flexibility: By toggling the write_to_online_store parameter, users can choose whether transformations should occur at write time (to optimize reads) or at read time (to preserve data freshness).

Here’s an example of applying transformations during ingestion:

```python
@on_demand_feature_view(
    sources=[driver_hourly_stats_view],
    schema=[Field(name="conv_rate_adjusted", dtype=Float64)],
    mode="pandas",
    write_to_online_store=True,  # Apply transformation during write time
)
def transformed_conv_rate(features_df: pd.DataFrame) -> pd.DataFrame:
    df = pd.DataFrame()
    df["conv_rate_adjusted"] = features_df["conv_rate"] * 1.1
    return df
```

With this new capability, data engineers can optimize online retrieval performance without sacrificing the flexibility of on-demand transformations.
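To illustrate why this helps, here is a minimal, hypothetical in-memory sketch of the write-time pattern (plain Python, not Feast's implementation; the store and function names are invented for this example). The transformation runs once at ingestion, so every subsequent read is a plain lookup:

```python
from typing import Dict

# Hypothetical in-memory sketch (not Feast internals) of write-time
# transformation: compute at ingestion, look up at retrieval.

online_store: Dict[str, Dict[str, float]] = {}

def transform(features: Dict[str, float]) -> Dict[str, float]:
    # The same logic as the ODFV example above: adjust conv_rate by 10%.
    return {"conv_rate_adjusted": features["conv_rate"] * 1.1}

def write_features(entity_key: str, features: Dict[str, float]) -> None:
    # Analogue of write_to_online_store=True: transform once, at write time.
    online_store[entity_key] = transform(features)

def read_features(entity_key: str) -> Dict[str, float]:
    # Online retrieval is now a dictionary lookup; no transformation
    # sits on the latency-sensitive read path.
    return online_store[entity_key]

write_features("driver_1001", {"conv_rate": 0.5})
print(read_features("driver_1001"))  # conv_rate_adjusted of roughly 0.55
```

The trade-off mirrors the toggle described above: the read path gets faster, at the cost of the stored value reflecting the data as of write time rather than being recomputed freshly on each read.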

The Future of ODFVs and Feature Transformations

These enhancements bring ODFVs closer to the goal of seamless feature engineering at scale. By combining high-speed Python-based transformations with the ability to optimize retrieval latency, Feast empowers teams to build more efficient, responsive, and production-ready feature pipelines.

For more detailed examples and use cases, check out the documentation for On Demand Feature Views. Whether you’re a data scientist prototyping features or an engineer optimizing a production system, the new ODFV capabilities offer the tools you need to succeed.

The future of Feature Transformations in Feast will be to ...



Copilot AI changed the title from "[WIP] Add feature transformation latency blog post to website" to "docs: Add feature transformation latency blog post" Jan 14, 2026
@franciscojavierarceo (Member)

@copilot can you run npm build to make sure the static content renders as expected?

Copilot AI commented Jan 14, 2026

> @copilot can you run npm build to make sure the static content renders as expected?

Build completed successfully! ✅

The blog post renders correctly with the latency comparison image. Here's what was verified:

Build output:

  • ✓ All 571 modules transformed
  • ✓ Static routes generated including /blog/feature-transformation-latency/index.html
  • ✓ Blog post appears in the listing at /blog/index.html
  • ✓ Image reference /images/blog/latency.png (174KB) is correctly linked

Rendered content:
Blog post screenshot

The hero image shows the latency decomposition comparison between Pandas and Native Python, and all content sections render properly with code examples and formatting intact.

Warning

Firewall rules blocked me from connecting to one or more addresses

I tried to connect to the following addresses, but was blocked by firewall rules:

  • telemetry.astro.build
    • Triggering command: /usr/local/bin/node node /home/REDACTED/work/feast/feast/infra/website/node_modules/.bin/astro build (dns block)

If you need me to access, download, or install something from one of these locations, you can either:



Development

Successfully merging this pull request may close these issues.

Add Feature Transformation Latency Blog Post to Website
