Tuesday, January 30, 2007

Mining data from 3,000 separate Excel files

One of the consulting projects I worked on was for the finished product exports department at a global consumer products company. The company produces products that get shipped all around the world, but this particular group concentrated on North America.
Because of the North American Free Trade Agreement (NAFTA), the company was able to save millions of dollars each year by taking advantage of tariff exemptions for products shipped among the U.S., Canada, and Mexico. Before I joined the project, the company's legal department had determined that they needed to provide documentation showing that these products were indeed exempt.
The regulations describe the raw materials that compose the finished products, so we needed to look at the constituent level for every product the company manufactures. I was informed that the company had close to 3,000 Excel spreadsheets for various products, each listing that product's composition. Additional information would be forthcoming shortly after I started on the project.
I looked at the Excel files and discovered that the composition data was not in a flat layout (headings and rows), so I couldn't readily import the information into a database table. I would need to manipulate the data into such a layout and then import it into a database.
Instead of working through all 3,000 of these files by brute force, I opted to automate the process. I decided to build a conversion tool in Excel that would search each file for the relevant contents and copy and paste them into a new spreadsheet in a table layout. I wrote VBA code in a standalone Excel workbook to do the job, along with code to record errors and other anomalies in a log.
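To give a flavor of the approach, here's a stripped-down sketch of the flattening idea, not the actual project code. The anchor labels ("Product Code", "Component") and the three output columns are placeholders I'm using for illustration; the real spreadsheets varied quite a bit, and the real tool handled many more cases:

    Sub FlattenCompositionSheet(wsSource As Worksheet, wsOutput As Worksheet, wsLog As Worksheet)
        Dim productCell As Range, headerCell As Range
        Dim rowIn As Long, rowOut As Long, rowLog As Long

        ' Find the anchor cells that identify the data we care about
        Set productCell = wsSource.Cells.Find(What:="Product Code", LookAt:=xlWhole)
        Set headerCell = wsSource.Cells.Find(What:="Component", LookAt:=xlWhole)

        If productCell Is Nothing Or headerCell Is Nothing Then
            ' Record the anomaly in the log sheet and skip this file
            rowLog = wsLog.Cells(wsLog.Rows.Count, 1).End(xlUp).Row + 1
            wsLog.Cells(rowLog, 1).Value = wsSource.Parent.Name
            wsLog.Cells(rowLog, 2).Value = "Expected labels not found"
            Exit Sub
        End If

        ' Append below whatever is already on the output sheet (row 1 holds headings)
        rowOut = wsOutput.Cells(wsOutput.Rows.Count, 1).End(xlUp).Row + 1

        ' Copy each component row into a flat layout: product, component, percentage
        rowIn = headerCell.Row + 1
        Do While Len(wsSource.Cells(rowIn, headerCell.Column).Value) > 0
            wsOutput.Cells(rowOut, 1).Value = productCell.Offset(0, 1).Value
            wsOutput.Cells(rowOut, 2).Value = wsSource.Cells(rowIn, headerCell.Column).Value
            wsOutput.Cells(rowOut, 3).Value = wsSource.Cells(rowIn, headerCell.Column + 1).Value
            rowIn = rowIn + 1
            rowOut = rowOut + 1
        Loop
    End Sub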
I also wrote VBA code in Microsoft Access, using the Microsoft Office object model, to launch the Data Conversion tool in Excel. I put all the directories where the files were located into a Lookup table and had the program go through each directory, performing the conversion on each Excel file and importing each spreadsheet's information into a master database table.
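Roughly, the Access side looked something like the sketch below. The table and field names (tblDirectories, DirectoryPath, Imported, tblComposition), the tool's file path, and the FlattenFile macro are all made-up placeholders, not the actual names from the project:

    Sub RunConversion()
        Dim xl As Object                      ' late-bound Excel.Application
        Dim db As DAO.Database
        Dim rs As DAO.Recordset
        Dim fileName As String, filePath As String, flatPath As String

        Set db = CurrentDb
        Set rs = db.OpenRecordset("SELECT DirectoryPath, Imported FROM tblDirectories")

        Set xl = CreateObject("Excel.Application")
        xl.Workbooks.Open "C:\Tools\ConversionTool.xls"   ' the standalone conversion tool workbook

        Do Until rs.EOF
            fileName = Dir(rs!DirectoryPath & "\*.xls")
            Do While Len(fileName) > 0
                filePath = rs!DirectoryPath & "\" & fileName
                flatPath = Left$(filePath, Len(filePath) - 4) & "_flat.xls"

                ' Have the conversion tool flatten this file into a new workbook
                xl.Run "ConversionTool.xls!FlattenFile", filePath, flatPath

                ' Pull the flattened data into the master table
                DoCmd.TransferSpreadsheet acImport, acSpreadsheetTypeExcel9, _
                    "tblComposition", flatPath, True

                fileName = Dir()
            Loop

            ' Flag this directory as processed in the Lookup table
            rs.Edit
            rs!Imported = True
            rs.Update
            rs.MoveNext
        Loop

        xl.Quit
        rs.Close
    End Sub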
I wound up being able to click a button and watch as the program performed the conversion. I had set up a flag in the Lookup table to indicate whether files were imported successfully. Then I would check the log in Excel for anomalies, handle them appropriately, and retry.
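For instance, a small helper like this hypothetical one could list whatever still needed a retry after the logged anomalies were dealt with (again, the table and field names are placeholders):

    Sub ListDirectoriesToRetry()
        Dim rs As DAO.Recordset
        ' Directories that never got flagged as imported are the retry candidates
        Set rs = CurrentDb.OpenRecordset( _
            "SELECT DirectoryPath FROM tblDirectories WHERE Imported = False")
        Do Until rs.EOF
            Debug.Print "Needs retry: " & rs!DirectoryPath
            rs.MoveNext
        Loop
        rs.Close
    End Sub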
I'll share more about this project in future updates.
Posted by Daniel at 10:07 PM
Topics: global consumer products company, nafta, programming