
How to Implement Scheduled Delta Updates from BigQuery to Neo4j?

Hi all, I'm exploring a strategy to optimize Neo4j graph updates by avoiding full reloads and instead performing delta migrations.

Goal:

To avoid full reloads and instead perform scheduled delta updates (only modified or new records) from BigQuery to Neo4j.

Proposed Flow (or any alternative that achieves the same result):

BigQuery (delta records based on timestamp)

Scheduled Script (Python / Cloud Function via Cloud Scheduler)

Neo4j (MERGE/UPDATE only delta nodes and relationships)

Specific Questions:

  1. Delta detection from BigQuery:

    • Is using a last_updated_timestamp column the best practice?

  2. Scheduling & Automation:

    • What’s the most reliable pattern in GCP for automating scheduled delta syncs?
      (e.g., Cloud Scheduler → Pub/Sub → Cloud Function / Cloud Run)

    • Dataflow Flex Template for BigQuery to Neo4j
      https://neo4j.com/docs/dataflow-bigquery/current/
      Can we update delta records using the above template?
  3. Recommended Tools / Templates:

    • Are there any open-source tools, libraries, or architectural blueprints for incremental sync from BigQuery to Neo4j?


To keep Neo4j in step with BigQuery, work off a last_updated_timestamp column. That way, only fresh or edited rows need pulling, and the pattern is simple enough for most teams to trust.
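A minimal sketch of the delta query, assuming a table with a last_updated_timestamp column (the table and column names here are illustrative):

```python
def delta_query(table: str, ts_col: str = "last_updated_timestamp") -> str:
    """Build a parameterized BigQuery SQL statement that selects only
    rows modified after a watermark passed as the @since parameter."""
    return f"SELECT * FROM `{table}` WHERE {ts_col} > @since ORDER BY {ts_col}"

# With the google-cloud-bigquery client you would bind @since via
# ScalarQueryParameter("since", "TIMESTAMP", watermark) and, after a
# successful run, persist the batch's max(last_updated_timestamp) as
# the next watermark.
print(delta_query("my_project.my_dataset.customers"))
```

Ordering by the timestamp makes it easy to record the high-water mark from the last row processed.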

After that, automate the data sync. Set up Cloud Scheduler, link it to Pub/Sub, and call a Cloud Function or Cloud Run job. The code should pull the deltas, then push them into Neo4j. 
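The Scheduler → Pub/Sub → Cloud Function step could look roughly like this. This is a sketch under assumptions, not a definitive implementation: the function names (sync_deltas, load_watermark, chunked), the Customer label, and the connection details are all placeholders to adapt to your project.

```python
def chunked(rows, size=1000):
    """Split delta rows into batches so each Neo4j transaction stays small."""
    batch = []
    for row in rows:
        batch.append(row)
        if len(batch) == size:
            yield batch
            batch = []
    if batch:
        yield batch

def load_watermark():
    """Return the timestamp of the last successful sync.
    Stub: in practice, read this from a state store (e.g. a small
    BigQuery table or Firestore) and update it after each run."""
    import datetime
    return datetime.datetime(1970, 1, 1, tzinfo=datetime.timezone.utc)

def sync_deltas(event, context):
    """Pub/Sub-triggered Cloud Function entry point (hypothetical wiring)."""
    # Imported lazily so this module also loads where the cloud client
    # libraries are not installed.
    from google.cloud import bigquery   # pip install google-cloud-bigquery
    from neo4j import GraphDatabase     # pip install neo4j

    client = bigquery.Client()
    rows = client.query(
        "SELECT * FROM `my_project.my_dataset.customers` "
        "WHERE last_updated_timestamp > @since",
        job_config=bigquery.QueryJobConfig(query_parameters=[
            bigquery.ScalarQueryParameter("since", "TIMESTAMP", load_watermark()),
        ]),
    ).result()

    with GraphDatabase.driver("neo4j://<host>:7687",
                              auth=("neo4j", "<password>")) as driver:
        with driver.session() as session:
            for batch in chunked(dict(r) for r in rows):
                # MERGE each delta row as a node; += upserts properties.
                session.run(
                    "UNWIND $rows AS row "
                    "MERGE (n:Customer {id: row.id}) "
                    "SET n += row",
                    rows=batch,
                )
```

Deploy this as the Cloud Function's entry point and point a Cloud Scheduler job at the Pub/Sub topic that triggers it.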

Use Cypher MERGE to either create or update nodes and relationships:

MERGE (n:Label {id: $id})
SET n += $props

This ensures:

  • A node is created if it doesn’t exist.
  • If it exists, it’s updated with the latest properties.

For relationships:

MATCH (a:LabelA {id: $a_id}), (b:LabelB {id: $b_id})
MERGE (a)-[r:REL_TYPE]->(b)
SET r += $rel_props
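For throughput, per-row MERGEs like the above are usually batched with UNWIND so many relationships are written per transaction. A minimal sketch of building such a statement in Python (the labels, relationship type, and property keys are placeholders):

```python
def unwind_rel_merge(a_label: str, b_label: str, rel_type: str) -> str:
    """Build an UNWIND-based Cypher statement that MERGEs a whole batch
    of relationships in a single round trip."""
    return (
        "UNWIND $rows AS row "
        f"MATCH (a:{a_label} {{id: row.a_id}}), (b:{b_label} {{id: row.b_id}}) "
        f"MERGE (a)-[r:{rel_type}]->(b) "
        "SET r += row.rel_props"
    )

# Usage with the Neo4j Python driver (session not created here):
# session.run(unwind_rel_merge("LabelA", "LabelB", "REL_TYPE"),
#             rows=[{"a_id": 1, "b_id": 2, "rel_props": {"since": "2024-01"}}])
```

Pairing this with indexes (or uniqueness constraints) on the id properties keeps the MATCH lookups fast as the graph grows.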

You can also adapt the Neo4j Dataflow template: swap in a source query that selects only the delta rows and keep a MERGE-based mapping, and the same job will process deltas instead of a full reload.

If your use case leans toward analytical modeling, not graph relationships (e.g., building dashboards or machine learning pipelines), consider whether syncing to Neo4j is necessary.

ETL platforms like Windsor.ai offer prebuilt, no-code BigQuery and Databricks connectors for versioned, incremental pipelines with full observability and lineage. While not designed for graph updates, they help in ETL pipelines focused on metrics, trends, and batch processing across platforms like Snowflake, Databricks, or BigQuery.