• "Hands Off" Recovery

    Anyone who has been on a nighttime support rotation can relate to the 3 AM phone call that gets you groggily out of bed, has you fixing some simple but important problem, and then leaves you staring at the ceiling by 4 AM wondering if you should just go into work, because you know you aren't getting back to sleep. All of the engineers in our office have been there, done that, and didn't like it very much. That's one of the reasons we focus so much on resilient operations for SymmetricDS, including our "Hands Off" recovery of data synchronization. "Hands Off" recovery is pretty much exactly what it sounds like: SymmetricDS has been built from the ground up to recover and keep databases in sync without manual intervention any time it possibly can.

    "Hands Off" Recovery features in SymmetricDS have been refined over the years as we have built and helped maintain large data synchronization scenarios for our customers. When synchronizing two or three databases, manual intervention doesn't matter as much, but when synchronizing ten to fifty thousand databases it really becomes a "must have."

    With large node network data sync scenarios, there are always a handful of databases that are offline or unreachable for one reason or another. A backhoe cuts through a network line, a hurricane causes long term power failures, a database becomes corrupt or needs regularly scheduled maintenance. Regardless of the reason, these outages need to be handled with the minimum amount of intervention, zero if possible. So when a database in your node network goes offline, SymmetricDS detects it, lets you know that it is having problems reaching that database, and then patiently keeps trying until the underlying cause rectifies itself. That might mean a network technician restoring network connectivity, a lineman bringing power back online or a database administrator fixing things up on the database end. Regardless, once the underlying problem is solved, SymmetricDS simply picks up where it left off and gets data back in sync. No manual intervention needed. So, you can sit back and watch it happen, or better yet, sleep through it!

  • Add A Replication Node

    Add a new node (a database participating in replication) to the SymmetricDS sync scenario. The new node could be a source, a target, or both. It will register with the master node, receive all of its configuration, and be immediately ready for replication.

  • Cancer Research UK data synchronization and transformation

    Cancer Research UK utilizes SymmetricDS for its flagship Race for Life event website.

  • Capture all change events into an event table

    In addition to replicating data changes from source to target, you may also wish to populate a table on the target to monitor all events. In this blog I will show you how to create a simple load filter that records all changes captured by SymmetricDS in an audit table.

    This table will hold just four columns to record the change.

    • Table Name - The table the change occurred on.
    • Event Type - Single character to represent Inserts(I), Updates(U), or Deletes(D).
    • Primary Key Values - The value of the primary key for the row that was changed (comma separated if more than one).
    • Primary Key Names - The name of the primary key column at the time of the change (comma separated if more than one).

    First, let's create the audit table on the target node by running the following SQL statement.

    create table audit (
        table_name varchar(50),
        event varchar(1),
        pk varchar(50),
        pk_columns varchar(250)
    );

    In the SymmetricDS Pro web console, go to the Configure->Load Filters screen.

    Load Filter Screen

    Create a new Load Filter of type BSH with the target table set to * so that all tables set up for replication will be filtered. You will also need to select the appropriate group link; remember, the load filter will fire on the target node.

    After saving the filter, select the Edit Scripts button.

    Select the "After Write Script" from the drop-down so that the script we will create fires after each data event is processed.

    Provide the following BeanShell code in the text editor and hit save.

    String tableName = table.getName();
    String eventType = data.getDataEventType().getCode();
    String[] pkData = data.getPkData(table);
    String[] pkCol = table.getPrimaryKeyColumnNames();
    String pkDataAsString = "";
    String pkColAsString = "";
    for (int i = 0; i < pkData.length; i++) {
        // Build comma separated lists of PK values and PK column names
        if (pkDataAsString.length() > 0) {
            pkDataAsString = pkDataAsString + ",";
            pkColAsString = pkColAsString + ",";
        }
        pkDataAsString = pkDataAsString + pkData[i];
        pkColAsString = pkColAsString + pkCol[i];
    }
    // Record the change in the audit table created above
    String sql = "insert into audit (table_name, event, pk, pk_columns) values (?,?,?,?)";
    engine.getSqlTemplate().update(sql,
        new Object[] { tableName, eventType, pkDataAsString, pkColAsString });

    Save your script and you're all set to start recording change data events into your new audit table.



  • Choosing The Right Replication Approach

    There are three primary methodologies used in database replication: log based, trigger based, and time based. Determining which of these approaches is best for your use case can be difficult without understanding the limitations of each type. This article will help identify the differences, pros, and cons of each approach.

    Log Based

    Log based replication is probably the most common and popular approach that many tools support in order to capture changes on a source database. This approach is often viewed as the least invasive since the database remains unchanged and only the log files provided by the database are used to determine changes. In general this is true; however, there is some potential overhead if database logging must be turned on or set up. Turning on or setting up logging in an active production environment might lead to unexpected changes in performance. If your production database is already configured with transaction logging enabled, then there is no additional performance impact on the applications using the database.

    Some other factors should be considered while setting up a log based replication model. The first is the turnover of the log files. Determining how frequently the logs fill up and are removed so that new logs can be created is important to understand. If your log replication tool is not running or is not able to keep up with the load at peak times before logs are rolled over, changes could be lost during these intervals.

    Another thing to consider in a log based approach is the number of tables that would contribute to the logging versus how many tables are being configured for replication. For example, if there are 500 tables in the database that are potentially taking on changes but only 10 of these tables need to be replicated, there might be a great deal of overhead in processing the logs for a small set of tables compared to the total set of tables contributing to the logging.   


    Pros

    • Potentially low or no overhead on the existing database.
    • No need for alterations to an existing database structure.


    Cons

    • If logging is not already turned on, there may be a performance impact once it is activated.
    • If logs are filled and removed to clear space for new logs to be written, there is the potential for loss of data.
    • If the number of tables involved in replication is significantly smaller than the total table set, the performance of the replication tool might be affected.


    Trigger Based

    Trigger based replication is built on the concept of utilizing database triggers to capture each change as part of the transaction in which the change occurs. Often this is viewed as more invasive than log based replication because the source tables need to be altered to include the triggers. It is important to understand that in this approach the replication tool is responsible for maintaining these triggers in the event they are not present or the source table is changed.

    One of the key values in this approach is that the capturing of a change is directly tied to the user acting on the system. This is because the same transaction that is being used to commit a user change is also ensuring the trigger fires successfully. This ensures all changes are captured or they would roll back with the user change if there was an issue.   

    Another factor to consider with trigger based solutions is around the data storage of all these changes. Most trigger based approaches will write directly to a set of tables within the database that are created and maintained by the replication tool. As long as the database is sized to scale, these changes will remain present in the database itself and will not have the threat of data loss like the log based approach might have in the event of rolling over the log files.


    Pros

    • Transactional guarantee that data is committed to the replication table once it is committed to the application tables.
    • Less threat of data loss as long as the database is scaled to grow as more data is collected.


    Cons

    • Involves altering existing application tables to support replication.
    • Performance could vary based on each replication tool's implementation of the change capture triggers.


    Time based

    Time based replication consists of selecting changes out of the source directly through queries. Although this requires the least permissions (select access) on the source database, there are some assumptions that need to be qualified in order to work properly. First, there needs to be a column that represents the last time a row was changed (usually a "last updated" column). If there is not a way to construct a query against each table to determine what has changed, this approach will not work properly.   

    If all the necessary tables for replication carry data about when they were last modified, the next thing to identify is whether or not deletes need to be captured. In most straightforward time based replication, deletes are not supported. The source database would need to perform what are known as "soft deletes," where a deleted column is toggled on or off to indicate whether the record is still in use. Because flipping this soft delete column from false to true also updates the last updated column, the change can be processed through time based replication. Otherwise, a true delete on the source will not be visible when selecting changes, as the record is no longer present to be retrieved.
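    To make the soft delete idea concrete, a time based change query might look like the following sketch, where the table and the last_updated and deleted columns are hypothetical examples rather than anything a particular tool requires:

    ```sql
    -- Select every row changed in the last minute, including soft deletes.
    -- Table and column names here are illustrative.
    select id, name, deleted, last_updated
    from customer
    where last_updated >= now() - interval '1 minute';

    -- A soft delete is just an update that flips the flag, so the
    -- last_updated column still changes and the row is picked up above.
    update customer set deleted = true, last_updated = now() where id = 42;
    ```

    A row returned with deleted set to true can then be treated as a logical delete on the target.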



    Pros

    • Only select access is required on the source database.
    • No additional tables or alterations to existing tables are necessary.


    Cons

    • Deletes are not supported unless the time based process reads all tables on source and target and performs a full comparison. This is probably not an option on an active system or if you need continuous replication rather than a nightly batch.
    • All tables participating in the time based replication must have a column that can be used to determine the last change.

  • Data Normalization using Transformations

    When integrating data from one database to another, sometimes the schemas are not the same. JumpMind has come across a number of projects where there was a need to transform a very denormalized database into more of a canonical or normal form.

    Sometimes normalization is needed as part of a migration path to a newer and better data model.

    Sometimes normalization is required just to feed another system.

    Whatever the reason, SymmetricDS can:

    • Capture the data change at the source database
    • Transform it
    • Transport it (across the web or just across your data center)
    • Transform it more (if needed)
    • Load it into your target database

    We will take a look at a simple example to demonstrate how this can be accomplished. We will transform an AppUser table that contains name, role and password information into three tables at the target: user, user_role, and user_password.

    For the purposes of this article, we will use embedded H2 databases that will be created automatically for you.

    In SymmetricDS, each database that is participating in synchronization is represented by a SymmetricDS node. Nodes belong to groups that are linked. If you are not already familiar with SymmetricDS it might make sense to read an overview of SymmetricDS.

    Do the following to get SymmetricDS installed and ready to go:

    Each properties file represents a node. In this case both nodes are hosted in the same SymmetricDS installation, but this doesn't have to be the case. For example, in a cloud deployment you might have a SymmetricDS installation in your datacenter that communicates over HTTP with a SymmetricDS installation hosted in the cloud.

    Open the web console at http://localhost:31415/app. You are viewing the source node. You can switch the context to the target node by selecting it in the dropdown at the upper right corner.

    The table structure we discussed has been created for you automatically. If you select the Explore tab, you will be able to view the tables we will be working with in the database explorer. The source tables have been created in the TEST schema. The target tables have been created in the USERS schema.

    Now go to the Configure > Transforms screen. Select “Auto Create”. The “Implied” option means that all columns from the original table will be passed through. We want to explicitly specify the column mapping ourselves, so select “Specified”.

    Now Edit the transform that was created. Set the “Target Schema” to “Users” because this is where our target tables reside. Set the “Target Table” to “User”. Save the transform.

    Now go ahead and “Auto Create” the mapping from “AppUser” to “User_Password” and “User_Role”. Note, in order for the tables to show up in the “Auto Create” dialog you will need to select “Show tables with transforms”.

    Next we need to set up the column mappings.

    Select the first transform and press “Edit Columns”. You will need to add three columns. The user_id to id transform can be a copy transform. Be sure to mark it as the PK.

    The next two transforms will be bsh transforms. We will map the source name column to a first_name and last_name column in the target table. We will look for the first space in the name column. Everything before the space is the first_name. Everything after the space is the last_name.

    This is the bsh script for first_name:

    if (NAME.contains(" ")) { return NAME.substring(0, NAME.indexOf(" ")); } else { return NAME; }

    This is the bsh script for last_name:

    if (NAME.contains(" ")) { return NAME.substring(NAME.indexOf(" ") + 1); } else { return NAME; }

    Now configure the column mappings for “User_Password” and “User_Role”.

    Make sure you have saved and closed all of your edit tabs. It's now time to test the transform. Insert the following SQL statements at the source, then inspect the target tables and experience the normalization of your data!

    insert into test.AppUser values(1, 'Billy Smith', 'user', '$%^#$@#$%^', null);
    insert into test.AppUser values(2, 'Willy Smith', 'user', '$%^#$@#$%^', null);
    insert into test.AppUser values(3, 'Wilma Smith', 'user', '$%^#$@#$%^', null);


  • International retailer data synchronization

    An international retailer utilizes SymmetricDS to synchronize all store data across the United States, Europe, and Asia.

  • Load Data to Target Database

    A wizard to walk users through the process of loading data between databases. The load will use the configuration in place for the nodes selected. This might be necessary if the source and target databases were not in sync before change capture was set up.

  • Mobile medical data synchronization

    A healthcare organization consolidates data and improves operations using SymmetricDS multi master replication.

  • SQL Server Replication vs SymmetricDS

    Data plays an ever increasing role in our daily lives and drives more business decisions than ever. With the steady increase in data storage technologies, consumers are faced with greater challenges keeping data synchronized for accurate reporting, analysis, and overall processing. Traditional databases such as SQL Server, Oracle, Postgres, and MySQL still play a major role in many companies. However, with the rise of cloud platforms and NoSQL technologies commonly used for warehousing and analytics, supporting a variety of technologies within each business has become important.


    For those SQL Server shops out there, SQL Server Replication is an out of the box solution that can be used to replicate and keep SQL Server databases in sync. However, it may not be that easy, and it may not be enough to keep your business moving forward in today's ever expanding world of data. This article takes a deeper dive into some of the pros and cons of utilizing SQL Server's replication engine over the alternatives to provide better insight for making the right decisions for your data.

    Setup and Maintenance

    SQL Server replication can be complex and requires someone with a fair amount of experience to set up and maintain. Such in-house expertise may or may not be available and needs to be considered when choosing a path forward for data replication. If you already have a person in your organization with SQL Server replication experience, this may be a suitable option. However, if you do not have such in-house expertise, an alternative tool that simplifies setup and maintenance might be a better option.

    SymmetricDS setup and configuration is consolidated through a few simple wizards that walk you through setting up your databases as well as your tables for replication. These configurations can generally be built in minutes. All this configuration is also provided as an export/import so the configuration can easily be deployed from a test environment into a production environment with a few steps. The built-in monitors and notifications can also be used to notify your team when issues arise with the replication so that they can be addressed immediately. 

    Version Compatibility

    SQL Server has a complex version matrix that should be used when a mix of SQL Server database versions and licenses are deployed in your company. Below is a link to the version matrix that should be followed to ensure all your deployed SQL Servers will line up with your replication needs. This can also provide overhead in upgrades and new deployments that need to be considered while using the out of the box replication tools from SQL Server.

    SymmetricDS offers out of the box backwards compatibility with the various versions so that the end user is not affected in an environment that has a mix of versions deployed. SymmetricDS will also work with all varieties of SQL Server from Express, Standard, and Enterprise all the way back to the MS SQL Server 2000 version. There is no need to be concerned about having a mix of SQL Server licenses and versions in your configuration anymore. You are free to focus on the data that needs to be replicated and where it should be transmitted.

    Cross Platform Replication

    SQL Server native replication is a great solution if your data all resides within SQL Server databases and will continue to reside in these platforms in the future. However, with the continued growth in analytics and other platforms dedicated to these use cases, other data storage platforms are often needed. SymmetricDS comes out of the box compatible with over 30 other databases in addition to all the flavors of SQL Server. This flexibility provides a tool for all your current replication needs as well as the ability to explore new platforms in the future.


    Cost

    Cost is always a consideration when purchasing software, along with the long term pricing model that comes with it. Microsoft charges per core and requires you to purchase the Enterprise level for replication. SymmetricDS provides a subscription model based on usage tiers, so you only pay for what you need. The tiers also provide a range of usage levels so that there is flexibility for increases and decreases in your usage.


    Vendor Lock-In

    Locking into a specific vendor is often overlooked and can be considered a disadvantage. Purchasing multiple products from a single vendor creates a dependency on that vendor and can substantially increase the cost to switch. Using third party solutions that are platform agnostic creates flexibility and lowers switching costs. SymmetricDS provides replication for numerous databases and allows for cross-platform replication. This provides more leverage in the future for changes to underlying platforms as necessary.


    Support

    SymmetricDS support comes with service level agreements as part of various support offerings to guarantee a quick response. The SymmetricDS support team consists of the engineers and developers of the software, with vast technical knowledge of the product and experience implementing it with customers. With a larger organization it is not always feasible to find this quality of support and response time, as the support staff is usually more disconnected from the development and implementation of the software provided.

  • Pausing Replication In SymmetricDS

    There are several different ways replication can be paused in SymmetricDS to fulfill different use cases. This blog will identify the differences between the various approaches so that you can make the best decision for your use case.



    Pausing replication might not be needed in most replication scenarios. However, if the situation arises where it is necessary to stop the flow of changes, here are some options to consider. Each option has different effects on the system and on what happens when replication is started again.


    Channel Enabled/Disabled

    A very quick and simple way to stop replication at the channel level (or for all channels if needed) is to just toggle the enabled flag. This can be found on the Configure->Channels screen under the "show advanced options". It can also be scripted using update sym_channel set enabled=0 where channel_id = ?.
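    For example, pausing and resuming a single channel can be scripted directly against the SymmetricDS runtime tables (the channel id below is illustrative):

    ```sql
    -- Pause routing for one channel
    update sym_channel set enabled = 0 where channel_id = 'sale_transaction';

    -- Resume it later
    update sym_channel set enabled = 1 where channel_id = 'sale_transaction';
    ```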

    When paused 

    This takes effect in routing: all disabled channels are skipped during the routing process. Changes continue to be captured but remain unrouted until the channel is enabled again. The purge process also bypasses these changes, as they are not eligible to purge, so the data is safe and ready to sync when the channel is enabled. Be sure not to leave a channel disabled for too long, though, or the sym_data table could grow excessively until these changes are allowed to replicate again (and ultimately be purged).

    When started again

    Once the channel is enabled the next routing job run will pick it up again. It will route as much data as the channel allows (see max data to route on a channel) and also fill up batches when possible (see max batch size on a channel). The larger batch size will allow it to catch up quickly by creating the larger batches to process when running again.


    Node Channel Control Table

    To control replication at the node and channel combination the sym_node_channel_ctl table can be used. This table combines a node and a channel to determine how replication will be controlled. There are two additional flags on the table to either suspend or ignore batches that meet this node and channel combination.   
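    As a sketch, suspending a single node and channel combination might look like the following (the node and channel ids are illustrative, and the column names should be verified against the sym_node_channel_ctl table in your version):

    ```sql
    -- Suspend batches for node '001' on the 'sale_transaction' channel
    insert into sym_node_channel_ctl (node_id, channel_id, suspend_enabled, ignore_enabled)
    values ('001', 'sale_transaction', 1, 0);

    -- Resume by clearing the flag (or deleting the row)
    update sym_node_channel_ctl set suspend_enabled = 0
    where node_id = '001' and channel_id = 'sale_transaction';
    ```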

    When paused 

    Routing will continue to process changes as normal, but when a batch is created, if the node and channel combination matches a row in the sym_node_channel_ctl table, the flags are used to determine how the batch is processed. If the ignore flag is on, the batch is marked for purging. If the suspend flag is on, the batch remains in a paused status indefinitely and will not be purged.

    When started again

    If the row that is holding batches in a suspended state is removed from this table, or the suspend flag is turned off, the next push or pull job that processes the batch will pick it up and process per normal operation again.


    Node Group Channel Window Table

    To control replication for a node group and channel combination over a specific time period, the sym_node_group_channel_wnd table can be used. This table is set up with a node group, channel, start time, and end time to control when replication is active. This might be used if replication should not occur during certain hours or during a planned outage window.
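    A sketch of a window row that only allows a channel to replicate overnight might look like this (the group and channel ids and the times are illustrative; verify the column names against the sym_node_group_channel_wnd table in your version):

    ```sql
    -- Only replicate the 'sale_transaction' channel to the 'store' group
    -- between 1 AM and 5 AM
    insert into sym_node_group_channel_wnd
        (node_group_id, channel_id, start_time, end_time, enabled)
    values ('store', 'sale_transaction', '01:00:00', '05:00:00', 1);
    ```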

    When paused 

    Routing will continue to process changes as normal, but when a batch is created for a node in a node group and a channel that matches the sym_node_group_channel_wnd table, the start and end times are checked to determine if the batch can be processed. All unprocessed batches will remain in a paused state indefinitely and will not be eligible for purging.

    When started again

    If the entry in this table is removed, or the current time falls within the start and end time range, the batches will process again under normal operation on the next push or pull job execution.


    Offline Node

    Taking a node offline, either intentionally or unintentionally, will pause all replication in and out of the node indefinitely. Batches in this case will remain idle and will not process while the node is offline. They will continue to back up and can have side effects on database storage and disk space if the node remains offline for a long time. If a node remains offline too long, it might be faster to unregister it and register it again with an initial load. In some extreme cases the initial load size might be less than the size of all the changes that accumulated while offline.


    Trigger Router Enabled/Disabled

    Replication can also be paused by disabling table routing (sym_trigger_router). Setting the enabled flag to false here stops capturing changes for the disabled tables. This is a bit more invasive than the options above, as SymmetricDS will no longer detect changes and will physically remove the triggers while disabled. These changes will be lost, and the only way to get back in sync again is through an initial load, so this approach should be used with caution.


    Trigger or Router Insert, Update, Delete Toggled

    If you need to control the DML type that is replicated or pause all three types (insert, update, and delete) you can do so at either the trigger or router level.


    At the trigger level it will prevent or remove triggers based on which DML is turned off. For example, if you turn off deletes then the delete trigger will be removed and only inserts and updates will be replicated. If you disable (uncheck) all three, all replication for the given table will be disregarded.


    At the router level it will put these changes into batches that are marked as unrouted (node id of -1). This will continue to capture changes but they will be immediately setup for purging and not sent.
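    As a sketch, the trigger level DML flags can also be toggled in SQL (the trigger id is illustrative, and the sync_on_* column names should be verified against the sym_trigger table in your version):

    ```sql
    -- Stop capturing deletes for one table trigger; the delete trigger is
    -- dropped the next time triggers are synchronized
    update sym_trigger set sync_on_delete = 0 where trigger_id = 'item';
    ```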


  • Permissions

    All SymmetricDS synchronization configurations require database connectivity. As a result, questions often arise around the topic of database permissions: specifically, what permissions are required for the SymmetricDS user, and whether the SymmetricDS tables need to reside in the same catalog or schema as the tables to be synchronized.

    With such a large variety of databases supported by the tool, it is difficult to address this topic with one global answer. In reality, many of these database platforms have different definitions for catalogs, schemas, and even security roles. As a result, this blog will address the topic at a conceptual level, hoping that you can apply the approach that works best for your scenario.

    1. Quick & Easy

    Connect your nodes using a database user account that has the same read, write, delete access that your application(s) utilize. As long as this account has the ability to create triggers and functions as well you should be all set. This will allow SymmetricDS to install all configuration and runtime tables that it uses alongside your tables (prefixed with SYM_ by default). The downside to setting up in this manner is that the same database user account will be used for multiple purposes and might make it difficult to differentiate any issues should they arise.

    2. Somewhere in the Middle

    Provide a newly created and dedicated database user account to be used during SymmetricDS node setup which will default to the same database, catalog, and schema that the tables to be synchronized also reside in. This requires a minimal amount of extra setup to create a new user account that also has the ability to read, write, and delete. Again the SymmetricDS configuration and runtime tables will be created alongside your tables but the different user account will help in debugging any issues.

    3. Advanced

    In this approach, again you will need to create a new dedicated user account for SymmetricDS. You will also set up this user account to use a different database, catalog, and schema. This ensures the SymmetricDS configuration and runtime tables are created in a different location altogether from the tables to be synchronized. This is also where things can get tricky from one database platform to the next, and the details of this setup will vary from database to database. At a high level, though, the user account needs to be able to read, write, and update tables in its default location as well as create and alter tables. The user account will also need the ability to read, write, and delete from the database, catalog, and schema where the tables to be synchronized reside. Finally, the user account will need the ability to create triggers and functions in this location as well.
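    As an illustrative sketch only (privilege names and syntax vary by platform, and the account and schema names here are hypothetical), the advanced setup might look like this on a SQL Server style database:

    ```sql
    -- Dedicated replication account whose default schema holds the SYM_ tables
    create login symmetric_user with password = 'changeme';
    create user symmetric_user for login symmetric_user with default_schema = sym;

    -- Full control over its own runtime schema
    grant create table to symmetric_user;
    grant alter, select, insert, update, delete on schema::sym to symmetric_user;

    -- Read/write on the application schema, plus alter so that
    -- the capture triggers can be created there
    grant select, insert, update, delete, alter on schema::app to symmetric_user;
    ```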

    See the documentation for a specific example of how to set up permissions using SQL Server.

  • Quick Config - Setup Tables for Replication

    Build some simple configuration to specify which tables will be replicated and the direction they will sync.

  • Setting Up Time Based Replication With SymmetricDS

    Time Based Replication in SymmetricDS

    If the source data to be replicated resides in a read-only (select only) database, common replication approaches like log based or trigger based are not an option. Through some additional configuration and a few prerequisites on the source database, SymmetricDS can still be used for replication in a time based approach.


    What is Time Based Replication?

    Time based replication is when data is selected directly out of the source and used to replicate to the target(s). Unlike log based or trigger based replication, where each change is captured precisely, time based replication requires some assumptions or additional overhead to determine the changes. If, for example, the source table(s) have a column that indicates the last update time, this can be used to obtain changes (although not deletes, discussed below). Another approach might be to select all data out of the source and compare it to the target. Although this seems more precise, it might not be feasible on larger data sets or databases that incur a lot of constant changes.

    If you are limited though to read-only access to the source data then the time based approach might be your best and only option to replicate changes to a target database.



    First, we need to identify the limitations of time based replication to determine if this approach will work with your source database.


    Does it contain a last updated column?  

    Do the tables to be replicated contain a column that has an accurate date-time of the last time a row was modified? If not, it might not be possible to determine the changes on this source database without a full data comparison between source and target. For this article, we will only focus on the cases where such a last updated column exists on the source tables.


    Do deletes matter?

    Deletes will not be captured with this approach unless they are soft deletes, where a column (perhaps named “deleted”) indicates the row is no longer active. In that case, the delete is simply an update to a column and will replicate. Fully replicating hard deletes is outside the scope of this article; it would require a full comparison between source and target.
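    A soft delete, in other words, is just another update. A hypothetical example on an assumed “customer” table:

    ```sql
    -- Hypothetical soft delete: the row is flagged rather than removed, and
    -- last_updated_time is touched, so the change is picked up like any
    -- other update by a last-updated-based query.
    UPDATE customer
       SET deleted = 1,
           last_updated_time = CURRENT_TIMESTAMP
     WHERE customer_id = 42;
    ```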


    SymmetricDS Configuration


    • Setup the primary/registration node as the target node
    • Setup the select only source node as an “extract only node”

    Setting up tables

    There will be two table triggers configured in SymmetricDS for each table. The first set is used for a full initial load to send all data in the source tables. The second uses an initial load select to pull only data within a time-based interval.

    • Create a new channel named “time-based” with default settings.
    • Create a new channel named “time-based-reload” with default settings AND check the reload channel box.

    • Use the “Auto Create” wizard on the Configure tab, Table Triggers screen.
    • Select tables for Time based replication
    • Select the “time-based” channel
    • Provide a suffix “-time-based”

    • On the configure tab and the table triggers screen type in the filter “-time-based” to filter only the triggers just created.
    • Use the select all to perform a bulk edit.

    • While bulk editing, turn off the insert, update, and delete on these triggers.
    • While bulk editing, set the reload channel for all of these tables to “time-based-reload”.


    • On the configure tab and the table routing screen type in the filter “-time-based”.
    • Use the select all to perform a bulk edit.

    • While bulk editing, provide an initial load select based on last update column and frequency.

    Table 1 - Example WHERE clauses that go back 1 minute on several common dialects

    Oracle     last_updated_time >= (sysdate - (1/1440))
    MSSQL      last_updated_time >= DATEADD(MINUTE, -1, GETDATE())
    MySQL      last_updated_time >= DATE_SUB(NOW(), INTERVAL 1 MINUTE)
    Postgres   last_updated_time >= (NOW() - INTERVAL '1 minute')
    H2         last_updated_time >= DATEADD('MINUTE', -1, CURRENT_TIMESTAMP)
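    If you script the setup instead of using the bulk edit screen, the same clause ends up stored as the initial load select on the trigger/router link. As a hedged sketch against the SymmetricDS runtime tables, the Postgres clause from Table 1 could be applied like this; the “-time-based” suffix matches the triggers created above:

    ```sql
    -- Hypothetical direct update of sym_trigger_router: attach the
    -- one-minute lookback as the initial load select for the time-based
    -- triggers (doubled single quotes escape the literal).
    UPDATE sym_trigger_router
       SET initial_load_select = 'last_updated_time >= (NOW() - INTERVAL ''1 minute'')'
     WHERE trigger_id LIKE '%-time-based';
    ```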



    Custom Job

    Create a custom job that is configured and scheduled to pull over updates from the source periodically.  

    • Choose the source nodes group
    • Choose BSH as the job type
    • Schedule the job to run based on the interval you chose above while setting up the table routing (every 1 minute for example).
    • On the Edit Script section of the job, provide the following, replacing the source and target node values accordingly:

    import org.jumpmind.symmetric.model.TableReloadRequest;

    TableReloadRequest reloadRequest = new TableReloadRequest();
    reloadRequest.setSourceNodeId("YOUR SOURCE NODE ID");
    reloadRequest.setTargetNodeId("YOUR TARGET NODE ID");
    reloadRequest.setCreateTime(new Date());


  • Setup your first node (The Master Node)

    Setting up a "master" node for database replication is just like setting up any other node with the exception that the master node is responsible for registering new nodes onto the SymmetricDS data synchronization scenario.

  • SMARTMD Multi-Tenant Medical Data Sync

    SMARTMD Leverages SymmetricDS to Provide Medical Clinic Data Synchronization.

  • Stop Guessing if Your Data is Correct

    SymmetricDS promises to sync your data, but how do you know it really worked? How do you really know 2 databases are in sync? Or, maybe you have well-meaning users that come along after the sync, changing data unexpectedly, and throwing production out of whack.

    Whatever the case, wouldn’t it be nice to be able to compare any 2 databases, and know exactly what the differences are and be able to prove they are in sync? That’s where a new, experimental feature of SymmetricDS comes in. It’s called simply dbcompare. Dbcompare is slated for SymmetricDS 3.8, but if you watch closely, it’s fully included starting in SymmetricDS 3.7.30.

    We’re excited about dbcompare because it comes out of the gate with some exciting features:

    • Compare any 2 databases, even if they are on different platforms. Have a customer table in SQL Server and a copy of that data in MySQL? Yep, dbcompare can compare the data.

    • Dbcompare can use your SymmetricDS config for a nice, automatic comparison of 2 of your nodes. This includes using table transforms that map table names and column names. So, for example, if you have an H2 “transactions” table sync’ing up to a table called “TX_TRN” on Oracle, dbcompare can handle that comparison across different database platforms and table names.

    • Dbcompare is designed to be a standalone tool, as well, that can run on and compare 2 databases, even if those databases have nothing to do with SymmetricDS. Download a copy of SymmetricDS and use the dbcompare command line utility against the databases you want to compare.

    • Dbcompare generates a useful report that spells out the state of all tables, whether they are different or not.

    • The tool can also optionally generate full SQL output that will bring the target database into sync with the source database. That’s all the INSERTs, UPDATEs, and DELETEs, with the correct formatting, syntax, etc. for the target database.

    • Dbcompare is part of the open source core of SymmetricDS.


    Dbcompare could be a nice tool to set up a cron job for extra monitoring of your syncs, or something used for occasional troubleshooting.


    Technically, dbcompare is a command line utility in the “bin” directory of the SymmetricDS installation. To run dbcompare, you will need two files with database connection information. One properties file is the “source” database, and the other is the “target” of the comparison.


    Here’s an example of running dbcompare for two source tables “transaction” and “sale_return_line_item”:


    <symds>/bin/dbcompare transaction,sale_return_line_item -s ./path/to/ -t ./path/to/ --output_sql ./%t.diff.sql


    Let’s break this down a little:

    transaction,sale_return_line_item = a comma-separated list of tables to include in the comparison. You can omit this if the source database uses SymmetricDS. If omitted, dbcompare will run for all tables with SymmetricDS triggers.

    -s = The source properties file to use for the comparison. If you’re using SymmetricDS, point this at the engine properties file in your engines folder. If you are not using SymmetricDS, you’ll need to set up your DB connection info in a new properties file.

    -t = The target properties file for the comparison.
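    If a database is not already a SymmetricDS engine, a minimal properties file is enough. The property names below follow the SymmetricDS engine file conventions; the driver, URL, and credentials are placeholders for illustration:

    ```properties
    # Minimal connection info for a dbcompare source or target.
    db.driver=org.postgresql.Driver
    db.url=jdbc:postgresql://localhost:5432/sourcedb
    db.user=replication
    db.password=secret
    ```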

    --output_sql = This specifies a file to write out SQL statements that would bring the target database into sync with the source database. Note the use of “%t” in the name here. If you use %t, you will get one .sql script per table, with the %t replaced by the table name.


    Running this command generates a report (to standard out, which you can redirect > or pipe to file) and the SQL files.


    Here’s an example of the report and SQL.



    The 10 missing transactions serve to create this TRN_TX.diff.sql:


    Feel free to download dbcompare and try it out, or contribute to our community site. Let us know what you think!


    View the source code here on GitHub:

  • SymmetricDS 3.8 Webinar On-Demand

    Watch a pre-recorded webinar demonstrating SymmetricDS Pro. Learn how to connect two MySQL databases, configure bi-directional synchronization, and add a transform to all tables that adds a column populated with the source node identifier.
  • SymmetricDS Data Migration

    Data migration using SymmetricDS Pro to easily move and transform data to support a new database platform, consolidate data, refresh technology, or simply upgrade systems with no downtime.
  • SymmetricDS Data Sheet

    Data sheet for SymmetricDS Pro cross platform database replication and change data capture, including key features, platform support, system requirements, product use cases, and support options.
