
SQL Server Integration Services (SSIS) Change Data Capture Attunity feature deprecations

In December 2025, Microsoft will discontinue support for the Change Data Capture (CDC) components by Attunity and the Change Data Capture (CDC) service for Oracle by Attunity in SQL Server Integration Services (SSIS). This blog provides details to help customers modernize to new solutions well in advance of this change. Support will be discontinued for the following components:

  • Change Data Capture (CDC) components by Attunity
  • Change Data Capture (CDC) service for Oracle by Attunity


Customers using these two features are encouraged to modernize to Data Factory in Microsoft Fabric or Azure Data Factory. Azure Data Factory provides incremental data loading capabilities, works with on-premises data sources through a self-hosted integration runtime, and is fully compatible with all impacted versions of SQL Server.
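To make the incremental-loading idea concrete, here is a minimal Python sketch of the high-watermark pattern that Azure Data Factory's incremental copy is built around: remember the last watermark you loaded, then pull only rows whose watermark is newer. This is an illustration only, not Data Factory itself; the table, column, and connection details (dbo.Orders, LastModified, dbo.WatermarkTable) are hypothetical placeholders.

```python
# High-watermark incremental load sketch (hypothetical table/column names).
import pyodbc

SRC_CONN = "DRIVER={ODBC Driver 18 for SQL Server};SERVER=src;DATABASE=Sales;Trusted_Connection=yes"
DST_CONN = "DRIVER={ODBC Driver 18 for SQL Server};SERVER=dst;DATABASE=SalesDW;Trusted_Connection=yes"

def load_new_rows() -> None:
    with pyodbc.connect(SRC_CONN) as src, pyodbc.connect(DST_CONN) as dst:
        dst_cur = dst.cursor()

        # 1. Read the last watermark persisted on the destination side.
        dst_cur.execute("SELECT MAX(WatermarkValue) FROM dbo.WatermarkTable")
        last_watermark = dst_cur.fetchone()[0]  # None on the very first run

        # 2. Pull only rows modified after that watermark from the source.
        src_cur = src.cursor()
        src_cur.execute(
            "SELECT OrderId, Amount, LastModified FROM dbo.Orders "
            "WHERE LastModified > ISNULL(?, '1900-01-01')",
            last_watermark,
        )
        rows = src_cur.fetchall()

        # 3. Write the delta and advance the watermark.
        dst_cur.executemany(
            "INSERT INTO dbo.Orders (OrderId, Amount, LastModified) VALUES (?, ?, ?)",
            [tuple(r) for r in rows],
        )
        if rows:
            dst_cur.execute(
                "UPDATE dbo.WatermarkTable SET WatermarkValue = ?",
                max(r.LastModified for r in rows),
            )
        dst.commit()

if __name__ == "__main__":
    load_new_rows()
```

In Azure Data Factory the same pattern is expressed declaratively with lookup and copy activities rather than hand-written SQL, but the watermark bookkeeping is the core of it.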

Data Factory in Microsoft Fabric enables you to move and transform data from various sources to various destinations. It’s a managed cloud service designed specifically for handling complex hybrid extract-transform-load (ETL), extract-load-transform (ELT), and data integration projects.

If you need any assistance as you plan your CDC modernization, please contact Microsoft Support.

Learn more about Data Factory in Microsoft Fabric and Azure Data Factory in the frequently asked questions below.

Frequently Asked Questions

What’s Data Factory in Microsoft Fabric?

Data Factory in Microsoft Fabric is the next generation of Azure Data Factory, providing cloud-scale data movement and data transformation services that allow you to solve the most complex ETL scenarios. It's designed to be easy to use, powerful, and truly enterprise-grade. Data Factory gives you a modern data integration experience to ingest, prepare, and transform data from a rich set of data sources (for example, databases, data warehouses, Lakehouses, real-time data, and more). Whether you are a citizen developer or a professional developer, you can transform the data with intelligent transformations and leverage a rich set of activities. With Data Factory in Microsoft Fabric, we are bringing Fast Copy (data movement) capabilities to both dataflows and data pipelines. With Fast Copy, you can move data between your favorite data stores blazingly fast. Most importantly, Fast Copy enables you to bring data to your Lakehouse and Data Warehouse in Microsoft Fabric for analytics.

What’s Azure Data Factory?

Azure Data Factory is a cloud-based data integration service that allows you to create data-driven workflows in the cloud for orchestrating and automating data movement and data transformation. It is a fully managed, serverless data integration solution for ingesting, preparing, and transforming all your data at scale. With Azure Data Factory, you can visually integrate data sources using more than 90 built-in, maintenance-free connectors. The service enables you to create and schedule data-driven workflows, called pipelines, that can ingest data from disparate data stores. You can build complex ETL processes that transform data visually with data flows or by using compute services such as Azure HDInsight Hadoop, Azure Databricks, and Azure SQL Database.
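As a rough sketch of what a pipeline looks like in practice, the snippet below defines and triggers a single-activity copy pipeline with the azure-mgmt-datafactory Python SDK. It assumes the factory, its linked services, and two datasets ("SourceDataset", "SinkDataset") already exist; the subscription, resource group, and dataset names are placeholders, and the exact model names and constructor signatures should be checked against the current SDK documentation.

```python
# Sketch: define and run a simple copy pipeline via the ADF management SDK.
# All resource names below are placeholders.
from azure.identity import DefaultAzureCredential
from azure.mgmt.datafactory import DataFactoryManagementClient
from azure.mgmt.datafactory.models import (
    CopyActivity,
    DatasetReference,
    PipelineResource,
    SqlServerSource,
    AzureSqlSink,
)

subscription_id = "<subscription-id>"
resource_group = "<resource-group>"
factory_name = "<data-factory-name>"

client = DataFactoryManagementClient(DefaultAzureCredential(), subscription_id)

# One copy activity: read from a SQL Server dataset, write to an Azure SQL dataset.
copy_activity = CopyActivity(
    name="CopyOrders",
    inputs=[DatasetReference(type="DatasetReference", reference_name="SourceDataset")],
    outputs=[DatasetReference(type="DatasetReference", reference_name="SinkDataset")],
    source=SqlServerSource(),
    sink=AzureSqlSink(),
)

# Publish the pipeline, then kick off a run and capture its run ID.
client.pipelines.create_or_update(
    resource_group, factory_name, "CopyOrdersPipeline",
    PipelineResource(activities=[copy_activity]),
)
run = client.pipelines.create_run(resource_group, factory_name, "CopyOrdersPipeline")
print(f"Started pipeline run {run.run_id}")
```

The same pipeline can, of course, be authored visually in the Azure Data Factory studio; the SDK route is shown here only to make the pipeline/activity/dataset structure explicit.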

You can use Azure Data Factory to access and integrate data from on-premises data sources. One way to do this is by using a self-hosted integration runtime, which acts as a bridge between your on-premises data sources and the cloud-based Azure Data Factory service. This allows you to create data-driven workflows that can ingest data from your on-premises data stores and move it to the cloud for further processing and transformation.

How fast can I ingest data in Fabric data pipelines?

Fabric Data Factory allows you to develop pipelines that maximize data movement throughput for your environment. These pipelines fully utilize the following resources:

  • Network bandwidth between the source and destination data stores.
  • Source or destination data store input/output operations per second (IOPS) and bandwidth.

This full utilization means you can estimate the overall throughput by measuring the minimum throughput available across the following resources (a rough calculation is sketched after this list):

  • Source data store
  • Destination data store
  • Network bandwidth between the source and destination data stores

Meanwhile, we continuously work on innovations to boost the best possible throughput you can achieve. Today, the service can move a 1 TB TPC-DI dataset (parquet files) into both a Fabric Lakehouse table and a Data Warehouse within five minutes, moving 1 billion rows in under one minute. Note that this performance is only a reference point from running the testing dataset above; the actual throughput still depends on the factors listed previously. In addition, you can always multiply your throughput by running multiple copy activities in parallel, for example with a ForEach loop.
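Here is a back-of-the-envelope version of that estimate in Python: overall throughput is bounded by the slowest of the source, destination, and network. The throughput figures in the example are made up for illustration; substitute your own measurements.

```python
# Copy-time estimate: the bottleneck is the slowest of source, destination, network.
def estimate_copy_minutes(dataset_gb: float,
                          source_mbps: float,
                          destination_mbps: float,
                          network_mbps: float) -> float:
    """Return the estimated copy duration in minutes, given throughputs in MB/s."""
    bottleneck_mbps = min(source_mbps, destination_mbps, network_mbps)
    seconds = (dataset_gb * 1024) / bottleneck_mbps
    return seconds / 60

# Example: a 1 TB (1024 GB) dataset where the network (3500 MB/s) is the bottleneck.
print(estimate_copy_minutes(dataset_gb=1024,
                            source_mbps=6000,
                            destination_mbps=5000,
                            network_mbps=3500))  # ~5 minutes
```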

Where can I find more training resources to get started?