Migrating data from Elasticsearch to Databricks using CData SSIS Components
Easily push Elasticsearch data to Databricks using the CData SSIS Tasks for Elasticsearch and Databricks.
Databricks is a unified data analytics platform that allows organizations to easily process, analyze, and visualize large amounts of data. It combines data engineering, data science, and machine learning capabilities in a single platform, making it easier for teams to collaborate and derive insights from their data.
The CData SSIS Components enhance SQL Server Integration Services by enabling users to easily import and export data from various sources and destinations.
In this article, we explore the data type mapping considerations when exporting to Databricks and walk through how to migrate Elasticsearch data to Databricks using the CData SSIS Components for Elasticsearch and Databricks.
Data Type Mapping
| Databricks Schema | CData Schema |
|---|---|
| int, integer, int32 | int |
| smallint, short, int16 | smallint |
| double, float, real | float |
| date | date |
| datetime, timestamp | datetime |
| time, timespan | time |
| string, varchar | nvarchar(max) if length > 4000; otherwise nvarchar(length) |
| long, int64, bigint | bigint |
| boolean, bool | tinyint |
| decimal, numeric | decimal |
| uuid | nvarchar(length) |
| binary, varbinary, longvarbinary | binary(1000), or varbinary(max) after SQL Server 2000 |
Special Considerations
- String/VARCHAR: String columns from Databricks map to different data types depending on the column length. If the length exceeds 4000, the column is mapped to nvarchar(max); otherwise, it is mapped to nvarchar(length).
- DECIMAL: Databricks supports DECIMAL types up to 38 digits of precision, but any source column beyond that precision can cause load errors.
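As a quick illustration of the string rule above, here is a minimal Python sketch of the length-based mapping. The function name and structure are illustrative only and are not part of the CData components:

```python
def map_databricks_string(length: int) -> str:
    """Illustrative only: mirrors the documented length rule for
    Databricks string/varchar columns mapped to the CData schema."""
    NVARCHAR_LIMIT = 4000  # documented cutoff for nvarchar(length)
    if length > NVARCHAR_LIMIT:
        return "nvarchar(max)"
    return f"nvarchar({length})"

# A varchar(255) column maps to nvarchar(255),
# while a varchar(8000) column maps to nvarchar(max).
print(map_databricks_string(255))   # nvarchar(255)
print(map_databricks_string(8000))  # nvarchar(max)
```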
About Elasticsearch Data Integration
Accessing and integrating live data from Elasticsearch has never been easier with CData. Customers rely on CData connectivity to:
- Access both the SQL endpoints and REST endpoints, optimizing connectivity and offering more options when it comes to reading and writing Elasticsearch data.
- Connect to virtually every Elasticsearch instance from v2.2 onward, including Open Source Elasticsearch subscriptions.
- Always receive a relevance score with query results without explicitly calling the SCORE() function, simplifying access from third-party tools and making it easy to see how results rank in text relevance.
- Search through multiple indices, relying on Elasticsearch to manage and process the query and results instead of the client machine.
Users frequently integrate Elasticsearch data with analytics tools such as Crystal Reports, Power BI, and Excel, and leverage our tools to enable a single, federated access layer to all of their data sources, including Elasticsearch.
For more information on CData's Elasticsearch solutions, check out our Knowledge Base article: CData Elasticsearch Driver Features & Differentiators.
Getting Started
Prerequisites
- Visual Studio 2022
- SQL Server Integration Services Projects extension for Visual Studio 2022
- CData SSIS Components for Databricks
- CData SSIS Components for Elasticsearch
Create the project and add components
1. Open Visual Studio and create a new Integration Services Project.
2. Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
3. Add a CData Elasticsearch Source control and a CData Databricks Destination control to the data flow task.
Configure the Elasticsearch source
Follow the steps below to specify properties required to connect to Elasticsearch.
1. Double-click the CData Elasticsearch Source to open the source component editor and add a new connection.
2. In the CData Elasticsearch Connection Manager, configure the connection properties, then test and save the connection. (To sanity-check these values outside of SSIS first, see the sketch after these steps.)

   Set the Server and Port connection properties to connect. To authenticate, set the User and Password properties, PKI (public key infrastructure) properties, or both. To use PKI, set the SSLClientCert, SSLClientCertType, SSLClientCertSubject, and SSLClientCertPassword properties.

   The data provider uses X-Pack Security for TLS/SSL and authentication. To connect over TLS/SSL, prefix the Server value with 'https://'. Note: TLS/SSL and client authentication must be enabled on X-Pack to use PKI.

   Once the data provider connects, X-Pack performs user authentication and grants role permissions based on the realms you have configured.

3. After saving the connection, select "Table or view" and choose the table or view to export into Databricks, then close the CData Elasticsearch Source Editor.
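If you want to confirm the Elasticsearch endpoint and credentials independently before wiring up the component, a minimal check is sketched below. It uses the official elasticsearch Python client rather than the CData component, and the host, credentials, and certificate path are placeholders:

```python
# Sanity-check an Elasticsearch endpoint outside of SSIS.
# Uses the official `elasticsearch` package (pip install elasticsearch),
# not the CData component; host and credentials below are placeholders.
from elasticsearch import Elasticsearch

es = Elasticsearch(
    "https://localhost:9200",             # prefix with https:// for TLS/SSL
    basic_auth=("elastic", "password"),   # analogous to the User/Password properties
    ca_certs="/path/to/http_ca.crt",      # CA bundle if the cluster uses a private CA
)

# info() returns cluster metadata only if the connection and auth succeed.
print(es.info()["version"]["number"])
```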
Configure the Databricks destination
With the Elasticsearch Source configured, we can configure the Databricks connection and map the columns.
1. Double-click the CData Databricks Destination to open the destination component editor and add a new connection.
2. In the CData Databricks Connection Manager, configure the connection properties, then test and save the connection. To connect to a Databricks cluster, set the properties as described below. (To verify these values outside of SSIS, see the sketch after these steps.)

   Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.

   - Server: Set to the Server Hostname of your Databricks cluster.
   - HTTPPath: Set to the HTTP Path of your Databricks cluster.
   - Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).

   Other helpful connection properties:

   - QueryPassthrough: When set to True, queries are passed directly to Databricks.
   - ConvertDateTimetoGMT: When set to True, the components convert date-time values to GMT instead of the local time of the machine.
   - UseUploadApi: Setting this property to True improves performance when a bulk INSERT operation carries a large amount of data.
   - UseCloudFetch: Specifies whether to use CloudFetch to improve query efficiency when the table contains over one million entries.

3. After saving the connection, select a table in the Use a Table menu and, in the Action menu, select Insert.
4. On the Column Mappings tab, configure the mappings from the input columns to the destination columns.
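To confirm the Server Hostname, HTTP Path, and personal access token before running the package, a minimal check with the databricks-sql-connector package (not the CData component) looks like the sketch below; all values shown are placeholders:

```python
# Verify Databricks connection details outside of SSIS.
# Uses `databricks-sql-connector` (pip install databricks-sql-connector),
# not the CData component; the values below are placeholders.
from databricks import sql

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",  # Server property
    http_path="/sql/1.0/warehouses/abcdef1234567890",              # HTTPPath property
    access_token="dapiXXXXXXXXXXXXXXXXXXXXXXXXXXXX",               # Token property
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT current_catalog(), current_schema()")
        print(cursor.fetchone())  # catalog/schema the token resolves to
```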
Run the project
You can now run the project. After the SSIS task finishes executing, the data from your Elasticsearch table or view is exported to the chosen Databricks table.
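As a quick post-migration check, you can compare the Elasticsearch document count with the Databricks row count. This sketch reuses the two client libraries from the earlier examples; the index name, table name, and connection values are placeholders:

```python
# Post-migration sanity check: compare source and destination counts.
# Index and table names are placeholders for your own objects.
from elasticsearch import Elasticsearch
from databricks import sql

es = Elasticsearch("https://localhost:9200", basic_auth=("elastic", "password"))
source_count = es.count(index="my_index")["count"]

with sql.connect(
    server_hostname="adb-1234567890123456.7.azuredatabricks.net",
    http_path="/sql/1.0/warehouses/abcdef1234567890",
    access_token="dapiXXXXXXXXXXXXXXXXXXXXXXXXXXXX",
) as connection:
    with connection.cursor() as cursor:
        cursor.execute("SELECT COUNT(*) FROM my_table")
        dest_count = cursor.fetchone()[0]

print(f"Elasticsearch: {source_count} docs, Databricks: {dest_count} rows")
assert source_count == dest_count, "Row counts differ; investigate the load."
```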