Writing data integration and data migration scenarios

From Toolsverse Knowledge Base
Revision as of 21:48, 27 July 2014

Basic stuff

Any data integration or data migration process can be described as extract-transform-load (ETL) or extract-load-transform (ELT). With that in mind, writing a scenario becomes a process of splitting the task into extracts, loads, and transformations. The scenario glues them all together and adds purpose and logic.

The [ETL engine] which powers Toolsverse products uses an XML-based language to create scenarios. Before reading this page, please take a look at the ETL scenario language specification. XML was a natural choice because it enforces a structure: loads follow extracts, and transformations follow extracts and loads. That said, you can choose to stream data, so that extract and load are combined into one logical operation, and to run things in parallel, in which case the actual order of extracts and loads is not guaranteed.

The ETL engine makes it possible to concentrate on the task at hand without reinventing the wheel. It hides the complexity of data integration, so in most cases the same techniques and language artifacts are used to work with any SQL or non-SQL data source. It is, however, possible to use the full power of the target database and do things like direct data load, anonymous SQL blocks, etc.

Please take a look at the ETL scenario examples.

Simple data migration scenario

Let's take a look at a simple data migration scenario:

<?xml version="1.0" encoding="UTF-8"?>
<scenario>
     <name>Migrate data</name>
     <script>migrate_date</script>
     <driver>auto</driver>
     <sources>
          <source>
               <name>employee</name>
               <extract>
                    <sql>select * from employee</sql>
               </extract>
          </source>
 
          <source>
               <name>emp_resume</name>
               <extract>
                    <sql>select * from emp_resume</sql>
               </extract>
          </source>
     </sources>
     <destinations>
          <destination>
               <name>employee</name>
               <metadata>true</metadata>
          </destination>
 
          <destination>
               <name>emp_resume</name>
               <metadata>true</metadata>
          </destination>
     </destinations>
</scenario>

In this example we:

  1. Extract all data from the employee and emp_resume tables in the source database
  2. Load the data into the destination, which can be a database or files
  3. Create destination tables on the fly if they don't exist (the <metadata>true</metadata> flag)

What we don't do:

  1. Specify what kind of source and destination we are working with
  2. Specify how to connect to the source and destination
  3. Specify how to create destination tables if they don't exist
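The engine picks a driver automatically when <driver>auto</driver> is used. If you want to be explicit about the kind of source you are working with, you can name the driver instead, reusing the declaration style from the transformation example later on this page (the MySQL driver classes below come from that example; the rest of the scenario stays as above):

```xml
<?xml version="1.0" encoding="UTF-8"?>
<scenario>
     <name>Migrate data</name>
     <script>migrate_date</script>
     <driver name="com.toolsverse.etl.driver.GenericJdbcDriver"
             parent="com.toolsverse.etl.driver.mysql.MySqlDriver"/>
     <!-- sources and destinations as in the example above -->
</scenario>
```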

Reading data from file-based data sources

In the previous example we created a simple data migration scenario which reads data from a database using SQL and loads it into another database or a file-based data source.

Reading data from file-based data sources is as easy as writing:

<?xml version="1.0" encoding="UTF-8"?>
<scenario>
     <name>Migrate data</name>
     <script>migrate_date</script>
     <driver>auto</driver>
     <sources>
          <source>
               <name>employee</name>
          </source>
 
          <source>
               <name>emp_resume</name>
          </source>
     </sources>
     <destinations>
          <destination>
               <name>employee</name>
               <metadata>true</metadata>
          </destination>
 
          <destination>
               <name>emp_resume</name>
               <metadata>true</metadata>
          </destination>
     </destinations>
</scenario>

In this example there is no SQL, so the engine assumes it needs to read data from files. It doesn't matter what format the files are in, as long as there is a suitable connector.

The ETL engine natively supports XML, JSON, CSV, Excel, and many other formats.

Things to do

Typically a full-blown data integration process includes at least an extract and a load. When everything is said and done, data from the source is moved to the destination. In some cases it makes sense to split the process into stages performed in different time frames. For example, you can:

  1. Extract data and store them in some intermediate format, for example XML.
  2. Load data sometime later.

You can control what to do:

   // Stage 1: extract data and store it in an intermediate format
   EtlConfig etlConfig = new EtlConfig();
 
   etlConfig.setAction(EtlConfig.EXTRACT);
 
   EtlProcess etlProcess = new EtlProcess(EtlProcess.EtlMode.EMBEDDED);
 
   EtlResponse response = engine.loadConfigAndExecute(etlConfig,
                    "test_etl_config.xml", etlProcess);

   // Stage 2: load the previously extracted data, possibly much later
   etlConfig = new EtlConfig();
 
   etlConfig.setAction(EtlConfig.LOAD);
 
   etlProcess = new EtlProcess(EtlProcess.EtlMode.EMBEDDED);
 
   response = engine.loadConfigAndExecute(etlConfig,
                    "test_etl_config.xml", etlProcess);

Using Transformations

If we assume that any source or destination has a dataset behind it, adding a transformation is basically attaching it to that source or destination. Transformations can be chained together.

There are three types of transformations:

  1. Column transformations: validation, adding or excluding a column, changing a column type, calculating a column value
  2. Dataset transformations: set operations such as join, de-duplication, pivot, etc.
  3. Dimension transformations: adding a dimension and extracting a dimension

Transformations are event-based and can be performed:

  • Before extract or load
  • During extract or load (inline transformation)
  • After extract or load
  • After success or failure

The example below demonstrates some column-level transformations:

<?xml version="1.0" encoding="UTF-8"?>
<scenario parallel="true">
     <name>Column level transformations using JavaScript</name>
     <script>column_transformations</script>
     <description>Extract data, change column name and value, add columns, remove columns</description>
     <driver name="com.toolsverse.etl.driver.GenericJdbcDriver" parent="com.toolsverse.etl.driver.mysql.MySqlDriver"/>
     <execute/>
     <sources>
          <source>
               <name>employee</name>
               <extract>
                    <sql>select * from employee</sql>
               </extract>
          </source>
     </sources>
     <destinations>
          <destination>
               <name>employee</name>
               <objectname>employee_date</objectname>
               <metadata use="true"/>
               <load stream="true"/>
               <variables>
                    <HIREDATE nativetype="int" sqltype="2"
                         code="if ({HIREDATE} != null) {value = new Date({HIREDATE}.getTime()).getFullYear();}"
                         field="HIREDATE" lang="JavaScript" name="hire_year"/>
                    <SALARY exclude="true" field="SALARY"/>
                    <BONUS exclude="true" field="BONUS"/>
 
                    <COMPENSATION add="true" nativetype="int" sqltype="2" name="compensation" lang="JavaScript"
                    code="value = parseInt(dataSet.getFieldValue(currentRow, 'SALARY') + dataSet.getFieldValue(currentRow, 'BONUS'));" />
 
                    <test add="true" nativetype="varchar(100)"
                         sqltype="12" value="abc"/>
                    <test2 add="true" nativetype="varchar(100)"
                         sqltype="12" value="aaa"/>
                    <MIDINIT exclude="true" field="MIDINIT"/>
               </variables>
          </destination>
     </destinations>
</scenario>
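To make the JavaScript in the code attributes above easier to follow, here is the same logic as standalone functions. This is a sketch: in the scenario, the engine substitutes {HIREDATE} with the current column value, exposes dataSet and currentRow, and reads the result back from the value variable; the function names below are illustrative, not part of the engine's API.

```javascript
// Sketch of the HIREDATE -> hire_year transformation: take a date column
// and keep only the year. In the scenario, {HIREDATE} is the current value
// and the engine picks the result up from `value`.
function hireYear(hiredate) {
    var value = null;
    if (hiredate != null) {
        value = new Date(hiredate.getTime()).getFullYear();
    }
    return value;
}

// Sketch of the added COMPENSATION column: sum of two other columns of the
// same row, truncated to an integer by parseInt.
function compensation(salary, bonus) {
    return parseInt(salary + bonus);
}
```

For example, hireYear(new Date(2014, 6, 27)) yields 2014, and a null HIREDATE stays null, just as the if-guard in the scenario intends.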

Using scripting languages

Using SQL