Once in a while, I run across a situation where I need to import data from a spreadsheet into a database.

If the data is to update a single table, this is pretty straightforward. On the other hand, if the data is modeled as a set of related tables, it gets messier.

In this case, the data was a customer list with information like address, contacts and phone numbers. The database models this as a series of tables that are associated with a company.

[Diagram: customer data model]

All this means that I need to import data into multiple tables, with each row related to the others appropriately.

So I need to create a row in the customer table (which holds the customer name), an address for the customer, and a phone number, as well as an association with the company.

The original spreadsheet is a simple list with name, address and phone for all of the customers for this company.

So the first step is to create a spreadsheet and use the original data to model the tables I’m going to have to update on new tabs:

[Screenshot]

I create an empty tab for the customers table, with a column for each of its fields:

[Screenshot]

On the customers tab, I fill in the cells with the information from the “Data” tab that needs to flow into the customers table (in this case abbreviation, name, and account_number). References to the other tab are created in the spreadsheet by entering the name of the tab and the corresponding cell, as shown below:

[Screenshot]
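For instance, if the customer name lives in cell B2 of the “Data” tab (the exact column and row here are assumptions), the name cell on the customers tab would simply contain:

```
=Data!B2
```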

For the ID column, I could just look at the table and add incremental values by hand, but to be safe, it’s a good idea to use a query that always yields the next value. That way the inserts can be run even while the application is running.

To do that, I write a simple SQL query that takes the last ID value and increments it by one:
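Something along these lines works, assuming the table is named customers with an integer primary key column id (table and column names here are assumptions based on the example; the same pattern applies on the other tabs):

```sql
-- Next free ID: highest existing value plus one.
SELECT MAX(id) + 1 FROM customers;
```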

Since this query doesn’t require any calculations, it is simply added as the value for the first column of the customers tab (a similar query will be needed on the other tabs):

[Screenshot]

The created_by and modified_by columns indicate who made this change, so these can be hard-coded with the ID of the user we want to associate with the change. The corresponding date columns can be populated with the current date and time (for MySQL this is the “Now()” function).

[Screenshot]

The same process is repeated for the other tables. Once I have tabs for all of the tables with the basic data populated, I need to add the relationships. While not a perfect solution, I use a query-based approach to relate these items based on the data.

Starting with the customers table, I need to populate the customer_address_id with the id from the addresses table. Since I have populated the “area_name” from the customer name, I can write a query that uses the customer name to get the id.

This involves creating a concatenated string that references the name to find that id:
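For example, in Google Sheets the lookup cell might contain a formula like the following (the addresses table, the area_name column, and the Data!B2 cell reference are assumptions based on the description above):

```
=CONCATENATE("(SELECT id FROM addresses WHERE area_name = '", Data!B2, "' LIMIT 1)")
```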

This results in a SQL statement like:
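For a customer named, say, “Acme Widgets” (a made-up example, with assumed table and column names), the cell would end up holding something like:

```sql
(SELECT id FROM addresses WHERE area_name = 'Acme Widgets' LIMIT 1)
```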

The same approach can be used to figure out the value for the company_phone_id, but in this case the value comes from the Phones tab:

The query ends up being a select that looks for the phone number:
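A sketch of that lookup, assuming the phone numbers live in a phones table with a number column (the table, column, and phone value below are all assumptions):

```sql
(SELECT id FROM phones WHERE number = '555-555-1234' LIMIT 1)
```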

The “LIMIT 1” in both of these queries ensures that this will work even if there are multiple matches (we’ll only link to the first one found).

Finally, I go back through all of the sheets and make sure every column is populated, using the value NULL where I don’t want a field to end up with a value (but only on fields that allow NULLs).

Once this is done, I export all of the rows to CSV files (I end up with one file for each table). In Google Spreadsheet, this is as simple as using File/Download as/CSV:

[Screenshot]

Once I have all of the CSV files, the next step is to build the SQL for the inserts, which is fairly simple to do using MySQL Workbench. Using a model that I’ve reverse engineered from the database, I go to each table in turn, and import the CSV for that table.

In the model, I double-click the table and then go to the Inserts tab:

[Screenshot]

Click on the little icon that looks like a camera in front of a spreadsheet to start the import:

[Screenshot]

I choose the file, click “Open”, and see the values from my spreadsheet:

[Screenshot]

I delete the header row, since it’s not actual data. To delete a row, I use the little icon that looks like a spreadsheet with a minus symbol:

[Screenshot]

Then I click the accept-changes icon to save the data in the model:

[Screenshot]

Then I export the data to a SQL file by choosing “Forward Engineer as SQL Script”:

[Screenshot]

I make sure “Omit Schema Qualifier” and “Generate INSERT statements” are both checked on the first step of the wizard, and then click “Continue”:

[Screenshot]

On the next step of the wizard, I make sure only the tables I care about are selected. To do so, I click “Show Filter”:

[Screenshot]

In this case I just exclude the companies table, since I’m not actually inserting into that one:

[Screenshot]

When I click “Continue”, the generated SQL script appears. I scroll down to make sure the insert statements are included (the table-creation SQL will be deleted later):

[Screenshot]

The next step is to make some edits that are essential for the SQL to actually do what I need. I open the SQL script in NetBeans (my editor of choice) and delete the table-creation statements:

[Screenshot]

The first edit I make is to fix some of the values to be what I want instead of what Workbench generated. The NULL and Now() values get placed inside quotes, so I replace them using a global find and replace. I’ve also had to make some other fixes, such as escaping a single quote (for names like “O’Brien”):

[Screenshot]
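To illustrate those edits, a generated line like the first statement below gets cleaned up into the second (the columns and values are made-up examples, not the actual data):

```sql
-- As generated: NULL and Now() are quoted, and the embedded quote breaks the string
INSERT INTO customers (id, name, customer_address_id, created) VALUES (42, 'O'Brien Supply', 'NULL', 'Now()');
-- After the find-and-replace and escaping the quote:
INSERT INTO customers (id, name, customer_address_id, created) VALUES (42, 'O\'Brien Supply', NULL, Now());
```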

Once the edits are done, I open a connection to the database in MySQL Workbench and open the SQL script to run it against my test database. This shows any errors in the SQL that I might have to go back and fix. Generally I try to make the fixes in the source spreadsheet, but sometimes I have to do the edits in NetBeans, as above.

Screen Shot 2014-11-09 at 1.31.17 PM

Executing the script inserts the data and builds the relationships. No errors means the data was loaded into the database as expected:

[Screenshot]

And now the customers are in the database ready for my application to reference.

Simple 🙂