I guess I was too vague in my description of the large dataset. I do have normalized data tables; the large tables in question are bond amortization tables. Some of the bonds have two-year durations, so they need only 8 fields (4 per year for two years), while others need as many as 200 fields (a 50-year duration). So my table may have as many as 200 fields to hold these numbers, with most of them empty for the shorter bonds. The values are never the same from one record to the next, so there is nothing to factor out into a lookup table. Hence my question: is there an efficient way to design such a table, or is one wide table the only way?
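To make it concrete, here is a rough sketch of the two layouts I'm weighing (SQLite just for brevity; the column names payment/interest/principal/balance are placeholders standing in for my "4 per year" fields, and my real names differ):

```python
import sqlite3

con = sqlite3.connect(":memory:")

# What I have now: one column per value, wide enough for the longest
# bond (50 years x 4 values = 200 columns). Shortened to 2 years here.
wide_cols = ", ".join(
    f"y{year}_{name} REAL"
    for year in range(1, 3)  # 2-year bond -> 8 data columns
    for name in ("payment", "interest", "principal", "balance")
)
con.execute(f"CREATE TABLE bond_wide (bond_id INTEGER PRIMARY KEY, {wide_cols})")

# The alternative I'm asking about: one row per bond per period, so a
# 2-year bond stores 2 rows and a 50-year bond stores 50 -- no empty
# columns, and no 200-column table definition.
con.execute("""
    CREATE TABLE amortization (
        bond_id   INTEGER,
        year      INTEGER,
        payment   REAL,
        interest  REAL,
        principal REAL,
        balance   REAL,
        PRIMARY KEY (bond_id, year)
    )
""")
con.execute("INSERT INTO amortization VALUES (1, 1, 520.0, 120.0, 400.0, 9600.0)")
con.execute("INSERT INTO amortization VALUES (1, 2, 520.0, 115.0, 405.0, 9195.0)")

# Pulling one bond's full schedule back out is a single keyed query.
print(con.execute("SELECT * FROM amortization WHERE bond_id = 1").fetchall())
```

The second layout is what I'd naively reach for, but I don't know whether it's the efficient choice here or whether the wide table is acceptable in practice.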