Migrating data from IBM Cloud Object Storage to Databricks using CData SSIS Components
Easily push IBM Cloud Object Storage data to Databricks using the CData SSIS Tasks for IBM Cloud Object Storage and Databricks.
Databricks is a unified data analytics platform that allows organizations to easily process, analyze, and visualize large amounts of data. It combines data engineering, data science, and machine learning capabilities in a single platform, making it easier for teams to collaborate and derive insights from their data.
The CData SSIS Components enhance SQL Server Integration Services by enabling users to easily import and export data from various sources and destinations.
In this article, we explore the data type mapping considerations when exporting to Databricks and walk through how to migrate IBM Cloud Object Storage data to Databricks using the CData SSIS Components for IBM Cloud Object Storage and Databricks.
Data Type Mapping
| Databricks Schema | CData Schema |
|---|---|
| int, integer, int32 | int |
| smallint, short, int16 | smallint |
| double, float, real | float |
| date | date |
| datetime, timestamp | datetime |
| time, timespan | time |
| string, varchar | If length > 4000: nvarchar(max); otherwise: nvarchar(length) |
| long, int64, bigint | bigint |
| boolean, bool | tinyint |
| decimal, numeric | decimal |
| uuid | nvarchar(length) |
| binary, varbinary, longvarbinary | binary(1000), or varbinary(max) after SQL Server 2000 |
Special Considerations
- String/VARCHAR: String columns from Databricks map to different SQL Server types depending on the column length. If the length exceeds 4000, the column maps to nvarchar(max); otherwise, it maps to nvarchar(length).
- DECIMAL: Databricks supports DECIMAL types up to 38 digits of precision; source columns with greater precision can cause load errors. The sketch after this list illustrates both rules.
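To make these rules concrete, here is a minimal sketch. The table and column names are hypothetical; the comments note the CData (SQL Server) type each Databricks column resolves to under the mapping above.

```sql
-- Hypothetical Databricks table illustrating the type mapping above
CREATE TABLE sales_orders (
    order_id    BIGINT,         -- maps to bigint
    customer_id INT,            -- maps to int
    order_total DECIMAL(18, 2), -- maps to decimal (precision must stay within 38 digits)
    order_note  STRING,         -- maps to nvarchar(max) if reported length > 4000, else nvarchar(length)
    is_priority BOOLEAN,        -- maps to tinyint
    created_at  TIMESTAMP       -- maps to datetime
);
```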
Prerequisites
- Visual Studio 2022
- SQL Server Integration Services Projects extension for Visual Studio 2022
- CData SSIS Components for Databricks
- CData SSIS Components for IBM Cloud Object Storage
Create the project and add components
- Open Visual Studio and create a new Integration Services Project.
- Add a new Data Flow Task to the Control Flow screen and open the Data Flow Task.
- Add a CData IBM Cloud Object Storage Source control and a CData Databricks Destination control to the data flow task.
Configure the IBM Cloud Object Storage source
Follow the steps below to specify properties required to connect to IBM Cloud Object Storage.
- Double-click the CData IBM Cloud Object Storage Source to open the source component editor and add a new connection.
- In the CData IBM Cloud Object Storage Connection Manager, configure the connection properties, then test and save the connection.
Register a New Instance of Cloud Object Storage
If you do not already have Cloud Object Storage in your IBM Cloud account, follow the procedure below to provision an instance of Cloud Object Storage in your account:
- Log in to your IBM Cloud account.
- Navigate to the Cloud Object Storage page in the IBM Cloud catalog, choose a name for your instance, and click Create. You will be redirected to the instance of Cloud Object Storage you just created.
Connecting using OAuth Authentication
There are certain connection properties you need to set before you can connect. You can obtain these as follows:
API Key
To connect with IBM Cloud Object Storage, you need an API Key. You can obtain this as follows:
- Log in to your IBM Cloud account.
- Navigate to the Platform API Keys page.
- In the middle-right corner, click "Create an IBM Cloud API Key" to create a new API Key.
- In the pop-up window, specify the API Key name and click "Create". Note the API Key, as you cannot retrieve it again from the dashboard.
Cloud Object Storage CRN
If you have multiple accounts, specify the CloudObjectStorageCRN explicitly. To find the appropriate value, you can:
- Query the Services view (see the query sketch after this list). This lists your IBM Cloud Object Storage instances along with the CRN for each.
- Locate the CRN directly in IBM Cloud. To do so, navigate to your IBM Cloud Dashboard. In the Resource List, under Storage, select your Cloud Object Storage resource to see its CRN.
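For instance, once connected you can run a simple query through the connector against the Services view described above (a minimal sketch):

```sql
-- List your Cloud Object Storage instances along with the CRN for each
SELECT * FROM Services;
```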
Connecting to Data
You can now set the following to connect to data:
- InitiateOAuth: Set this to GETANDREFRESH. You can use InitiateOAuth to avoid repeating the OAuth exchange and manually setting the OAuthAccessToken.
- ApiKey: Set this to the API key you noted during setup.
- CloudObjectStorageCRN (Optional): Set this to the Cloud Object Storage CRN you want to work with. While the connector attempts to retrieve this automatically, specifying it explicitly is recommended if you have more than one Cloud Object Storage account.
When you connect, the connector completes the OAuth process:
- It extracts the access token and authenticates requests.
- It saves OAuth values in OAuthSettingsLocation to be persisted across connections.
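Put together, a minimal connection string might look like the following sketch (the bracketed values are placeholders, not real credentials):

```
InitiateOAuth=GETANDREFRESH;ApiKey=<your IBM Cloud API key>;CloudObjectStorageCRN=<your instance CRN>;
```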
- After saving the connection, select "Table or view" and select the table or view to export into Databricks, then close the CData IBM Cloud Object Storage Source Editor.
Configure the Databricks destination
With the IBM Cloud Object Storage Source configured, we can configure the Databricks connection and map the columns.
- Double-click the CData Databricks Destination to open the destination component editor and add a new connection.
- In the CData Databricks Connection Manager, configure the connection properties, then test and save the connection. To connect to a Databricks cluster, set the properties as described below.
Note: The needed values can be found in your Databricks instance by navigating to Clusters, selecting the desired cluster, and selecting the JDBC/ODBC tab under Advanced Options.
- Server: Set to the Server Hostname of your Databricks cluster.
- HTTPPath: Set to the HTTP Path of your Databricks cluster.
- Token: Set to your personal access token (this value can be obtained by navigating to the User Settings page of your Databricks instance and selecting the Access Tokens tab).
Other helpful connection properties
- QueryPassthrough: When this is set to True, queries are passed through directly to Databricks.
- ConvertDateTimetoGMT: When this is set to True, the components will convert date-time values to GMT, instead of the local time of the machine.
- UseUploadApi: Setting this property to True improves performance for bulk INSERT operations that move large amounts of data.
- UseCloudFetch: This option specifies whether to use CloudFetch to improve query efficiency when the table contains over one million entries.
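As a sketch, the resulting Databricks connection string might look like the following (the bracketed values are placeholders taken from your own cluster's JDBC/ODBC settings and User Settings page):

```
Server=<Server Hostname>;HTTPPath=<HTTP Path>;Token=<personal access token>;
```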
- After saving the connection, select a table in the Use a Table menu and, in the Action menu, select Insert.
- On the Column Mappings tab, configure the mappings from the input columns to the destination columns.
Run the project
You can now run the project. After the SSIS task finishes executing, the data from your IBM Cloud Object Storage table or view is exported to the chosen Databricks table.
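To confirm the migration, you can run a quick row-count check against the destination table in Databricks (a sketch; sales_orders is the hypothetical table name used earlier):

```sql
-- Verify that the migrated rows arrived in the destination table
SELECT COUNT(*) FROM sales_orders;
```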