Triggers
Triggers specify the events to monitor in order to execute a recipe's actions. Trigger events can be initiated within applications like Salesforce or Jira when specific actions occur, such as creating a new contact or updating an existing ticket. They can also fire when a new line is added to a file, or on a predefined schedule, executing at a set time or interval.
Integration Hub receives trigger events in real time, depending on the available API, or periodically checks for events by polling the application.
Triggers are categorized based on when they detect new events (trigger mechanism) and how they organize or process these events (trigger dispatch).
Figure: Trigger mechanism
Trigger behavior
Recipes capture and queue trigger events in sequence for processing as recipe jobs. The recipe tracks its position using a cursor and advances through the trigger event queue synchronously, with configurable throughput. Integration Hub ensures no job duplication by maintaining a record of processed trigger events.
Figure: Trigger events are queued and processed by the Recipe as jobs
Integration Hub triggers operate with the following characteristics:
In-sequence delivery
Trigger events are processed in chronological order, ensuring that the oldest records are handled first, in the sequence they are received by Integration Hub.
Durable cursor position
A recipe remembers the trigger events it has processed, even across stopped and running states. Whenever a recipe is started, it picks up where it stopped and processes all trigger events that occurred while it was stopped.
Figure: When the recipe is stopped at 10/21/2017, 11:30 AM and started again days or weeks later, it picks up trigger events from the point at which it was stopped (10/21/2017, 11:30 AM)
No duplication
Each recipe maintains a record of the trigger events it has seen and will not process duplicate events.
Flow control
Recipes process trigger events synchronously; for example, a second job starts only after the first job has completed. When moving large volumes of data, you can maximize throughput with batch processing and by running multiple concurrent jobs.
Guaranteed delivery
For Integration Hub polling triggers, Integration Hub guarantees trigger event delivery. Even if servers experience temporary downtime, or if the network is unstable, Integration Hub ensures that triggers are picked up and processed in sequence.
Webhook events, which power most real-time Integration Hub triggers, can inherently be lost. To mitigate this, most Integration Hub-built real-time triggers (a notable exception is the HTTP webhook trigger) include a backup polling mechanism that ensures missed webhook trigger events are picked up.
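The behavior described above can be pictured with a minimal sketch: an in-memory queue with a durable cursor and a dedup record. This is illustrative Python only; the class and method names are assumptions, not Integration Hub APIs.

```python
# Minimal sketch of the trigger event queue described above: in-sequence
# delivery, a durable cursor, and no duplication. Illustrative names only.
from collections import deque

class TriggerEventQueue:
    def __init__(self):
        self.queue = deque()        # trigger events waiting to become jobs
        self.processed_ids = set()  # record of events already processed
        self.cursor = None          # last processed event (durable cursor)

    def enqueue(self, event_id, payload):
        """Queue a trigger event unless it was already processed (no duplication)."""
        if event_id in self.processed_ids:
            return
        self.queue.append((event_id, payload))

    def process_next(self, handler):
        """Run one job; a second job starts only after this one completes."""
        if not self.queue:
            return False
        event_id, payload = self.queue.popleft()  # oldest event first
        handler(payload)                          # the recipe job
        self.processed_ids.add(event_id)
        self.cursor = event_id                    # cursor advances durably
        return True

q = TriggerEventQueue()
q.enqueue("evt-1", {"contact": "Ada"})
q.enqueue("evt-1", {"contact": "Ada"})  # duplicate event: ignored
q.process_next(print)                   # -> {'contact': 'Ada'}
```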
Trigger mechanisms
Trigger mechanisms determine when a trigger will fire. In this section, we cover polling triggers, real-time triggers and scheduled triggers.
Polling triggers
Polling triggers periodically check for new events by querying the app at intervals based on your Integration Hub plan, which can be as frequent as every five minutes. Each poll may retrieve multiple events, potentially creating several jobs from a single poll.
When you first start a recipe, the polling trigger collects events from the date specified in the pick up events from field. For instance, a recipe might fetch all NetSuite customers created or updated since January 1, 2025, at 10:00 AM PST. After the initial fetch, the recipe continues to poll at regular intervals according to your plan. For example, every five minutes, it would retrieve new NetSuite customers created or updated in the last five minutes.
When you stop the recipe, polling triggers cease to fetch new events. If you restart the recipe, polling triggers retrieve all events that occurred while the recipe was stopped.
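As a rough illustration of this behavior, the loop below polls a hypothetical REST endpoint on a five-minute interval. The URL and the updated_since parameter are assumptions for illustration, not a real connector API.

```python
# Hypothetical polling loop: fetch events changed since the last poll window,
# turn each into a job, then sleep until the next interval.
import time
from datetime import datetime, timezone

import requests  # third-party HTTP client

POLL_INTERVAL_SECONDS = 300  # e.g. a five-minute polling interval

def poll_forever(api_url, since):
    while True:
        window_start = datetime.now(timezone.utc)  # next poll resumes here
        resp = requests.get(api_url, params={"updated_since": since.isoformat()})
        resp.raise_for_status()
        for event in resp.json():         # one poll may return multiple events,
            print("new job for:", event)  # each becoming its own job
        since = window_start              # no gap between polling windows
        time.sleep(POLL_INTERVAL_SECONDS)
```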
Example: Polling trigger
A Jira issue creation trigger has a five-minute polling interval, as indicated in its trigger hint.
Figure: Trigger poll interval
For instance, if a recipe is configured to poll every five minutes for new Jira issues, it retrieves any issues created during each polling cycle. A single poll may capture multiple newly created issues, leading to the creation of multiple jobs.
If the recipe is stopped on February 1, 2025, at 12:00 AM PST, it will no longer fetch new trigger events. Upon restarting the recipe on March 10, 2025, Integration Hub retrieves all Jira issues created since February 1, 2025.
Real-time triggers
Real-time triggers are typically built on asynchronous notification mechanisms and require registration in the connected app, either via API or manually through the app interface, to indicate interest in a specific event. When the event occurs, the app sends a notification to Integration Hub, generating a trigger event.
Webhooks are a common mechanism for real-time triggers, and most real-time triggers in Integration Hub are webhook-based. The key advantage of webhooks is their efficiency: notifications are received instantly when an event occurs, eliminating the need for Integration Hub to check for new events at regular intervals.
Real-time triggers in Integration Hub (excluding HTTP real-time triggers) generally rely on webhooks supplemented by periodic polling. While traditional polling triggers might check for events every five minutes, real-time triggers use longer polling intervals, often around one hour. This polling mechanism also enables users to specify a From date when starting a recipe, ensuring events are captured from a specific point in time.
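A rough sketch of this webhook-plus-backup-poll pattern is shown below, using Flask for the webhook receiver. The endpoint path, the hourly interval, and the fetch_recent_events helper are all assumptions for illustration, not the platform's implementation.

```python
# Webhook receiver with a backup poll: real-time events arrive via POST, and
# an hourly poll picks up any events whose webhook delivery was lost.
import threading
import time

from flask import Flask, request  # third-party web framework

app = Flask(__name__)
seen_ids = set()  # shared dedup record between the webhook and the backup poll

def fetch_recent_events():
    return []  # stand-in for querying the connected app's API

@app.route("/webhook", methods=["POST"])
def webhook():
    event = request.get_json()
    if event["id"] not in seen_ids:      # skip duplicates
        seen_ids.add(event["id"])
        print("real-time trigger event:", event["id"])
    return "", 204

def backup_poll():
    while True:
        time.sleep(3600)                 # longer polling interval, e.g. hourly
        for event in fetch_recent_events():
            if event["id"] not in seen_ids:
                seen_ids.add(event["id"])
                print("missed webhook picked up:", event["id"])

if __name__ == "__main__":
    threading.Thread(target=backup_poll, daemon=True).start()
    app.run(port=8000)
```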
Scheduled triggers
Scheduled triggers run at predefined intervals, such as hourly, daily, monthly, or on specific dates and times.
Figure: Schedule trigger
At the scheduled time or interval, this trigger retrieves all events that meet the specified criteria. Unlike other triggers, scheduled triggers can return events that have already been processed.
Similar to batch triggers, scheduled triggers return events in batches. Users can define the maximum batch size. For example, if the batch size is set to 100 and 420 new events are detected, five jobs will be created: the first four containing 100 events each, and the fifth containing 20 events.
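The batching arithmetic is easy to verify with a few lines of Python (illustrative only):

```python
# 420 events with a maximum batch size of 100 yield five jobs: 4 x 100 + 1 x 20.
def split_into_jobs(events, batch_size):
    return [events[i:i + batch_size] for i in range(0, len(events), batch_size)]

jobs = split_into_jobs(list(range(420)), batch_size=100)
print([len(job) for job in jobs])  # -> [100, 100, 100, 100, 20]
```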
Scheduler triggers (clock/timer)
Scheduler triggers let you define the exact timing for your recipe execution. There are two types of scheduler triggers:
New Scheduled Event: Allows you to set the initial trigger time and specify the intervals for subsequent executions.
New Scheduled Event (Advanced): Lets you define specific days and times for the recipe to run. If only the minutes field is set (for example, 30), the recipe executes 24 times a day, at 30 minutes past each hour. If both the hour and minute fields are specified, the recipe runs once per day at the designated time, as the sketch below illustrates.
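The sketch below models those two rules in plain Python. It is a simplified illustration under the stated semantics, not the scheduler's actual implementation.

```python
# Next-run computation: minutes-only fires at that minute past every hour;
# hour + minute fires once per day at the designated time.
from datetime import datetime, timedelta

def next_run(now, minute, hour=None):
    if hour is None:
        # Minutes-only: fires 24 times a day, at `minute` past each hour.
        candidate = now.replace(minute=minute, second=0, microsecond=0)
        return candidate if candidate > now else candidate + timedelta(hours=1)
    # Hour and minute: fires once per day at the designated time.
    candidate = now.replace(hour=hour, minute=minute, second=0, microsecond=0)
    return candidate if candidate > now else candidate + timedelta(days=1)

print(next_run(datetime(2025, 4, 4, 10, 45), minute=30))          # 2025-04-04 11:30:00
print(next_run(datetime(2025, 4, 4, 10, 45), minute=30, hour=9))  # 2025-04-05 09:30:00
```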
Trigger dispatches
Trigger dispatches define whether a trigger returns a single event or a collection of events. This section explores single triggers and batch triggers.
Single triggers
Single triggers are ideal for real-time, continuous data synchronization. For instance, they enable seamless movement of opportunities from Salesforce to NetSuite as sales orders immediately after an opportunity is closed. The majority of triggers in Integration Hub function as single triggers, ensuring efficient and instantaneous data transfer.
Batch triggers
Batch triggers are designed to enhance throughput by retrieving trigger events in bulk rather than individually. They are particularly useful for handling high volumes of data, such as transferring large amounts of user activity data from Marketo to data warehouses like Redshift.
Similar to polling triggers, batch triggers fetch new events at regular intervals. During configuration, users can define the batch size, allowing for efficient processing of multiple events at once.
Figure: Batch triggers process trigger events in batches of user-specified sizes
Bulk triggers
Bulk triggers are designed for transferring large volumes of data efficiently. They operate by sending records as a CSV stream, enabling seamless data transfer from a source to a destination within Integration Hub. Unlike other triggers, bulk triggers do not allow individual access to records but instead facilitate the movement of large datasets.
This method is particularly useful for exports and high-throughput workflows, such as syncing Jira tasks or updating ServiceNow tables. Bulk triggers play a crucial role in ETL/ELT processes, serving as the primary mechanism for ingesting data from various sources into Integration Hub.
A key advantage of bulk triggers is their ability to process an unlimited number of records in a single job, simplifying tracking and monitoring for high-volume use cases.
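The streaming pattern might look like the sketch below: records flow through as CSV without the recipe touching individual rows. The source URL, chunk size, and handle_chunk callback are illustrative assumptions.

```python
# Stream a CSV export from a source and forward it to a destination in chunks,
# without loading the whole dataset into memory.
import csv

import requests  # third-party HTTP client

def stream_csv(source_url, handle_chunk, chunk_rows=10_000):
    with requests.get(source_url, stream=True) as resp:
        resp.raise_for_status()
        lines = (line.decode("utf-8") for line in resp.iter_lines() if line)
        reader = csv.reader(lines)
        header = next(reader)
        chunk = []
        for row in reader:
            chunk.append(row)
            if len(chunk) >= chunk_rows:
                handle_chunk(header, chunk)  # e.g. bulk-load into the destination
                chunk = []
        if chunk:
            handle_chunk(header, chunk)      # flush the final partial chunk
```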
Note
Bulk triggers require dedicated users to periodically monitor the completion status of the action. This functionality is not available in test mode.
The When first started, this recipe should pick up events from field allows recipes to retrieve past trigger events from a specified date and time. This ensures that the recipe captures events that occurred before it was activated, rather than only processing new events created after the start time.
For example, if a Salesforce New Object trigger has this field set to August 27, 2024, at 12:00 AM PST, the recipe will fetch all relevant events from that date onward.
Figure: Create/Update Recipe
With this setting, when the recipe starts, it only fetches events created after August 27, 2024, at 12:00 AM PST.
Not all triggers include the "When first started, this recipe should pick up events from" field. For triggers without this option, the starting point for fetching trigger events is predefined, typically as an offset from the recipe's start time. Common default offsets include:
The moment the recipe first starts
One hour before the recipe starts
One day before the recipe starts
This offset is usually specified in the trigger hint for the connector.
The When first started, this recipe should pick up events from value can only be set once and cannot be modified after the recipe has been started for the first time.
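Resolving the starting point might be sketched as follows; the offset names and the function itself are hypothetical, but the precedence (explicit value first, connector default otherwise) follows the description above.

```python
# Resolve the timestamp from which trigger events are first fetched.
from datetime import datetime, timedelta, timezone

DEFAULT_OFFSETS = {
    "recipe_start": timedelta(0),    # the moment the recipe first starts
    "one_hour": timedelta(hours=1),  # one hour before the recipe starts
    "one_day": timedelta(days=1),    # one day before the recipe starts
}

def resolve_since(recipe_start, pick_up_from=None, default_offset="recipe_start"):
    if pick_up_from is not None:
        return pick_up_from  # set once; immutable after the first start
    return recipe_start - DEFAULT_OFFSETS[default_offset]

start = datetime(2025, 1, 1, 10, 0, tzinfo=timezone.utc)
print(resolve_since(start, default_offset="one_hour"))  # 2025-01-01 09:00:00+00:00
```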
Timezone
For triggers without a dedicated Timezone input field, the timezone of the user who creates the recipe is applied. When another user views the recipe, the displayed time adjusts to their timezone, but the underlying value remains based on the creator’s timezone.
This means:
If User A (timezone: -07:00 Pacific Time) creates a recipe, the input time is stored with their timezone.
If User B (timezone: +05:30 Asia/Kolkata) views or deploys the recipe, the time appears in their local timezone but retains the original value set by User A.
The workspace timezone does not affect the "When first started, this recipe should pick up events from" field unless explicitly configured within the connector.
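This behavior can be demonstrated with Python's zoneinfo; the scenario mirrors the User A/User B example above, and the timestamp itself is illustrative.

```python
# The stored value keeps the creator's timezone; viewers see the same instant
# rendered in their own timezone. Requires Python 3.9+ for zoneinfo.
from datetime import datetime
from zoneinfo import ZoneInfo

# User A (Pacific Time) sets the pick-up time; stored with their timezone.
stored = datetime(2024, 8, 27, 0, 0, tzinfo=ZoneInfo("America/Los_Angeles"))

# User B (Asia/Kolkata) sees the same underlying instant in their local time.
displayed = stored.astimezone(ZoneInfo("Asia/Kolkata"))

print(stored)     # 2024-08-27 00:00:00-07:00
print(displayed)  # 2024-08-27 12:30:00+05:30 (same underlying value)
```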
Trigger conditions
Trigger conditions are additional rules that define which trigger events should be processed. For example, you can configure a condition to process only events related to specific user accounts.
Integration Hub evaluates trigger conditions after fetching the trigger events. This means that all new Jira Issues created in the last five minutes are initially retrieved, and then filtered based on the specified user account criteria. Consequently, for a New Jira Issue trigger, only issues associated with the designated user are processed. If a recipe is not picking up certain expected events, trigger conditions may be the reason.
Note: Trigger conditions do not track field changes; they only verify whether the specified condition is met.
To add a trigger condition, enable the Set trigger condition toggle. The trigger data tree will then appear, displaying available variables to define the condition.
Define the trigger condition. For more information on the available conditions, refer to the IF condition article. This setup ensures that only issues whose trigger data match the specified condition and value are fetched.
To add an additional trigger condition, select + and choose OR or AND from the picklist. The selected operator defines how all additional trigger conditions are combined.
Figure: Setup
Define the additional trigger condition. Values are case-sensitive and must match exactly. Trigger conditions can be combined with either AND or OR, but not both.
Figure: AND operator
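Conceptually, the evaluation order looks like the sketch below: fetch first, filter second. The field names, operator, and sample data are assumptions for illustration.

```python
# Post-fetch filtering: all events are retrieved, then trigger conditions
# decide which ones become jobs. Comparison is case-sensitive.
def passes_condition(event, field, operator, value):
    if operator == "equals":
        return event.get(field) == value
    raise ValueError(f"unsupported operator: {operator}")

fetched = [
    {"key": "OPS-1", "reporter": "alice"},
    {"key": "OPS-2", "reporter": "Alice"},  # different case: will not match
]
matched = [e for e in fetched
           if passes_condition(e, "reporter", "equals", "alice")]
print([e["key"] for e in matched])  # -> ['OPS-1']
```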
Change Data Capture (CDC)
CDC is a method for detecting and tracking modifications in a database. It enables real-time or near-real-time data monitoring and synchronization, eliminating the need for continuous polling.
The main purpose of CDC is to capture and record changes such as inserts, updates, and deletions made to database tables. These changes are then transmitted to downstream systems, data warehouses, or analytics platforms, ensuring that all connected systems remain up to date with the latest data.
How CDC works in Integration Hub
Integration Hub utilizes triggers to track changes in a specified app or system. It manages Change Data Capture (CDC) by detecting modifications in real time and sending notifications, enabling seamless data replication and synchronization across multiple systems.
Triggers operate with in-sequence delivery, maintain records of processed jobs, prevent duplicate processing, and ensure that jobs are completed in order. Trigger dispatches can be either single (for real-time data synchronization) or bulk/batch, which enhances throughput when handling large data volumes.
Supported Data Sources for Change Data Capture (CDC)
Integration Hub supports Change Data Capture (CDC) across a variety of data sources, including:
Software as a Service (SaaS) platforms
On-premise systems
Databases such as MySQL, PostgreSQL, and Snowflake
Integration Hub File Storage
Cloud storage services like Amazon S3
Enterprise Resource Planning (ERP) systems
Advanced CDC strategies
To enhance Change Data Capture (CDC), Integration Hub offers advanced strategies for efficient data handling, including filtering, managing large change volumes, and optimizing performance.
Filtering and Conditional Triggers
Advanced CDC techniques leverage filtering and conditional triggers to selectively capture and propagate specific data changes. This ensures precise control over which updates are processed and sent to downstream systems.
Handling Large Volumes of Changes
For high-volume data changes, batch processing and micro-batching can be utilized to efficiently process and transfer records. These techniques help regulate data load while ensuring timely synchronization.
Optimizing Performance
CDC performance can be enhanced using the following techniques (combined in the sketch after this list):
Built-in cursor management for tracking high watermarks
Auto-deduplication to prevent redundant processing
In-sequence processing to maintain data integrity
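A toy combination of these three techniques, using SQLite as a stand-in source; the table, columns, and watermark format are all illustrative assumptions.

```python
# High-watermark CDC poll: query only rows past the watermark, process them
# in order, and dedupe on (id, updated_at) so nothing runs twice.
import sqlite3

def poll_changes(conn, watermark, seen):
    rows = conn.execute(
        "SELECT id, updated_at FROM customers "
        "WHERE updated_at > ? ORDER BY updated_at",  # in-sequence processing
        (watermark,),
    ).fetchall()
    for row_id, updated_at in rows:
        key = (row_id, updated_at)
        if key in seen:                             # auto-deduplication
            continue
        seen.add(key)
        print("change captured:", key)
        watermark = max(watermark, updated_at)      # advance the high watermark
    return watermark

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER, updated_at TEXT)")
conn.executemany("INSERT INTO customers VALUES (?, ?)",
                 [(1, "2025-01-01T10:00"), (2, "2025-01-01T10:05")])
wm = poll_changes(conn, "", set())  # captures both rows, returns new watermark
```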
Variable-Speed Data Pipelines
Integration Hub enables variable-speed data pipelines, including:
Near real-time and continuous data streaming
Micro-batches for frequent polling
Batching for scheduled processing
This flexibility allows organizations to tailor data orchestration strategies based on specific business requirements.