I think it's safe to say that anyone who's worked with really large Access databases (in excess of 50 or 100 megabytes) has encountered slow compact and repair cycles. My favorite anecdote in this regard is a very large database managed by an old buddy of mine, Steve. His database has several large tables containing more than 20 million rows each. When it's time to compact and repair, he starts it up on Friday before leaving work, and the operation is usually complete by Monday morning.
Not to worry! What Steve really needs to do is break up his humongous ACCDB file into multiple ACCDBs, each containing just one of his big tables, then link the individual database files to a central database. The central database, of course, would contain the forms, reports, queries, and other database objects. He'll find that compacting the constituent databases (as sequential operations) goes much faster than compacting the one big database.
Also, one advantage of breaking big databases up into smaller pieces is that, in the event of database corruption, the corruption will affect only one portion of the total database. Presumably, the database system is easily restored by replacing the corrupted database from a backup.
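The split-and-link approach the post describes can be sketched in VBA; the file paths and table names below are hypothetical, and the compact step assumes the destination file does not already exist:

```vba
' Sketch: link a big table from a separate back-end ACCDB into the
' central front-end database. Paths and table names are hypothetical.
Sub LinkBackEndTable()
    ' Creates a linked table named "Invoices" pointing at the back-end file
    DoCmd.TransferDatabase acLink, "Microsoft Access", _
        "C:\Data\Invoices_BE.accdb", acTable, "Invoices", "Invoices"
End Sub

' Compact each back-end file in turn (run sequentially, as the post suggests).
' CompactDatabase writes a new file, so the destination must not exist yet.
Sub CompactOneBackEnd()
    DBEngine.CompactDatabase "C:\Data\Invoices_BE.accdb", _
                             "C:\Data\Invoices_BE_compacted.accdb"
End Sub
```

Looping the compact step over each back-end file keeps any single operation small, which is the whole point of the tip.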
Several large tables of 20+ million rows each ????
They must not have had many fields.... I had a db at one time with one table with approx. 6 million rows and it was 1.9 GB... and I only use that table for some minor (slow) lookups.
Nice tip! Another suggestion is that if you have your database files on a network server, copy them over to your local machine, run compact and repair, then copy the smaller database back. There is far less network I/O required overall.
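The copy-locally-then-compact routine could look something like this in VBA; all paths here are hypothetical:

```vba
' Sketch: pull the back-end off the network, compact it locally,
' then push the smaller file back. Paths are hypothetical.
Sub CompactOverLocalCopy()
    Const srcPath = "\\Server\Share\BigData.accdb"
    Const localPath = "C:\Temp\BigData.accdb"
    Const compacted = "C:\Temp\BigData_small.accdb"

    FileCopy srcPath, localPath                ' one big sequential read
    DBEngine.CompactDatabase localPath, compacted
    FileCopy compacted, srcPath                ' one big sequential write
    Kill localPath                             ' clean up the temp copies
    Kill compacted
End Sub
```

The two FileCopy calls are sequential streams, which is far cheaper over a network than the random I/O a remote compact generates.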
Unfortunately you cannot define referential integrity for references between tables that are not in the same database.
I agree with Andre. Databases without referential integrity are poor database design (and bad to work with I might add), except when there are no relationships between the tables. I copy data locally as Brandon has suggested, compact it, then overwrite the data back up on the network... in code. This speeds things up dramatically.
I have used Michael's suggestion for years and it has worked well for me. I also used VBA code to drop indexes before compacting and then add indexes back after it completed. Seemed to speed it up a lot. I wonder if the same would hold true for validation rules. Hmm?
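The drop-indexes-then-rebuild step mentioned above can be sketched with DAO; the table, field, and index names are hypothetical, and the compact itself would run between the two halves (you cannot compact the database you are currently running code in):

```vba
' Sketch: drop an index before compacting and recreate it afterwards.
' Table, field, and index names are hypothetical.
Sub DropAndRebuildIndex()
    Dim tdf As DAO.TableDef
    Dim idx As DAO.Index
    Set tdf = CurrentDb.TableDefs("Invoices")

    tdf.Indexes.Delete "idxInvoiceDate"          ' drop before the compact

    ' ... compact the back-end here (from a separate database) ...

    Set idx = tdf.CreateIndex("idxInvoiceDate")  ' recreate afterwards
    idx.Fields.Append idx.CreateField("InvoiceDate")
    tdf.Indexes.Append idx
End Sub
```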
>They must not have had many fields....

A typical contact name is about 110 characters. That works out to about 18 million names (rows). A typical invoice "line detail" such as quantity, price, amount, product id tends to be about 35 to 40 characters per row. At 40 chars per row (including field byte overheads) you get about 50 million rows. I think in actual practice the number of records you can fit will be lower, since one would have a good number of indexes etc. So it is not that uncommon to see tables these days with millions of rows. The idea and tip of moving some tables out to separate back-end mdb files is a great way to really extend the limits of a JET/ACE based Access application.

>Databases without referential integrity are poor database design

That is true. However, while you do lose referential integrity between the separate back ends, often it's not such a problem. For example, you might have a historic invoice file that holds all invoices older than five years. In this case, for tax reasons or other issues, you're never going to delete these invoices anyway. Furthermore, the invoice table and the invoice details table in that separate back end can still use referential integrity for operations between those tables within that single back end.

While copying the file from the back-end server to your local machine to compact and repair is MUCH faster than compacting over the network, an even better tip here is to simply schedule a compact and repair on the actual server itself. You do not need to have MS Access installed on the server to do this. Most editions of Windows, including Windows Server 2003 and 2008, do include JET, and the ACE data engine for 2007 is also available as a separate download. These Windows scripts can be run on the server. I explain how to write these Windows scripts that can run your Access code at night here: www.members.shaw.ca/.../Index.html

Albert D. Kallal
Edmonton, Alberta Canada
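A server-side scheduled compact of the kind Albert describes could be a small VBScript run from Task Scheduler. This is a sketch under assumptions: the paths are hypothetical, and the `DAO.DBEngine.120` ProgID is the one installed with the ACE engine (older Jet-only machines would use `DAO.DBEngine.36`):

```vbscript
' Sketch: compact an Access back-end on the server itself, no Access
' install required. Paths are hypothetical; DAO.DBEngine.120 ships with ACE.
Dim eng, fso
Set eng = CreateObject("DAO.DBEngine.120")
Set fso = CreateObject("Scripting.FileSystemObject")

' CompactDatabase writes to a new file, so compact to a temp name first
eng.CompactDatabase "D:\Data\Invoices_BE.accdb", "D:\Data\Invoices_BE_tmp.accdb"

' Swap the compacted copy into place
fso.DeleteFile "D:\Data\Invoices_BE.accdb"
fso.MoveFile "D:\Data\Invoices_BE_tmp.accdb", "D:\Data\Invoices_BE.accdb"
```

In practice you would only run this when no users have the file open, e.g. overnight.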
In addition to what others have brought up, I would reject the notion of database corruption being easily recovered from in this scenario. All databases would have to have been backed up simultaneously, or under some other guaranteed "no changes" scenario, in order to ensure data integrity between the databases. Also, while Albert's suggestion of compacting directly on the server is a good idea, it presupposes that you have suitable access to the server, which is often not the case in a corporate scenario.
With the advent of SQL Server Express I find my backends less and less in Access, especially something this size.
Aside from the issue of referential integrity, data inconsistency could also result from loss of data contained in other tables, e.g. losing the Departments table which has been used in payroll processing.
I'm glad to see all the comments indicating the problems with this solution as well as alternatives. In my not so humble opinion they should've been included in the original posting.
Andre Minhorst wrote: "Unfortunately you cannot define referential integrity for references between tables that are not in the same database."

Philbo wrote: "Databases without referential integrity are poor database design"

The above are responses to the tip to break big databases up into smaller pieces. Doing so eliminates the ability to use referential integrity, and hence the responses. But there are alternatives to referential integrity that can be used after breaking the big database into smaller pieces. You can use code with transactions that updates the dispersed databases, based on user actions. It's good to have this option. Thanks for posting this idea.

Gilad
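The transactions-in-code alternative Gilad mentions works because a DAO workspace transaction spans every database opened through that workspace. A minimal sketch, with hypothetical paths, table names, and SQL:

```vba
' Sketch: wrap updates to two separate back-end databases in one
' workspace-level transaction. Paths, tables, and SQL are hypothetical.
Sub UpdateDispersedDatabases()
    Dim wrk As DAO.Workspace
    Dim dbOrders As DAO.Database
    Dim dbHistory As DAO.Database

    Set wrk = DBEngine.Workspaces(0)
    Set dbOrders = wrk.OpenDatabase("C:\Data\Orders_BE.accdb")
    Set dbHistory = wrk.OpenDatabase("C:\Data\History_BE.accdb")

    wrk.BeginTrans                      ' transaction spans both databases
    On Error GoTo RollbackAll
    dbHistory.Execute "INSERT INTO OrdersArchive (OrderID) VALUES (42)", dbFailOnError
    dbOrders.Execute "DELETE FROM Orders WHERE OrderID = 42", dbFailOnError
    wrk.CommitTrans
    Exit Sub

RollbackAll:
    wrk.Rollback                        ' undo changes in both on any failure
End Sub
```

As MichKa's posting below argues, this duplicates in application code what the engine would otherwise enforce, so it carries the testing burden he describes.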
@Gilad: > You can use code with transactions that updates > the dispersed databases, based on user actions. True, however, consider this past newsgroup posting shown below, from the very knowledgeable Michael "Michka" Kaplan. Michael is a former member of the Access Development Team and a current Microsoft employee. Tom Wickerath
Microsoft Access MVP Reference: See posting # 15 here:
groups.google.com/.../db33ab980d1b0ef7

Well, to this (and the responses by Andy and others) I have three different responses:

1) The fact that you do not have enforced relationships at the engine level actually means you may well have "lost" data. If your queries are designed well they will never include the bad data, and thus everything will work well as far as anyone can tell. Your own design skills work against you in this case.

2) If you take the trouble to design everything as you have, then you are actually going through a lot of effort to duplicate the functionality provided by Jet. There is nothing wrong with that, except (a) perhaps you have bugs, and obviously Jet gets the testing of over a million customers, and (b) your code can hardly save on resources, since VBA code is less performant than engine-level validation in Jet.

3) The experience of an individual developer notwithstanding, is it truly your contention that an "average" user of Access (who is not a developer in most cases) is really going to be able to duplicate the functionality of referential integrity in VBA code? And do you truly think that it is therefore good advice to give in a newsgroup such as this without even a caveat explaining how much experience it takes to get away with such an approach?

-- MichKa [MS]
Now I know a little of what Fabian Pascal felt all those years ago on the CompuServe MSACCESS forum. Without engine-level enforced RI you have corruption, or at best no way to stop the possibility of corruption. Application-level code will not stop data inconsistency (CORRUPTION); it needs to be enforced at the database engine level. If you do not care about your data then the post has some value; otherwise it is sadly what one has come to expect.
Splitting the database works best if all of the data is not needed at the same time. One of my systems used a separate MDB for each day. The date was part of the name xxxYYYYMMDD.mdb. The system linked the current day's tables as Today_name and the previous day's as Prior_name at startup. All of the logic was in the control database. The user could select any two dates and thus run comparison reports for changes over one day, one week, one month, one year, or anything in between. Referential integrity was maintained between the control tables and the ones in the xxxYYYYMMDD.mdb's. However, each day's data was independent of other days, so referential integrity between tables in different xxxYYYYMMDD.mdb's was not an issue.
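The startup relink described above can be sketched by pointing an existing linked table's connect string at the dated file and refreshing it; the folder, prefix, and table names here are hypothetical:

```vba
' Sketch: relink the Today_/Prior_ tables to the dated MDB files at
' startup, following the xxxYYYYMMDD.mdb pattern. Names are hypothetical.
Sub RelinkDailyTables(todayDate As Date, priorDate As Date)
    Dim tdf As DAO.TableDef

    Set tdf = CurrentDb.TableDefs("Today_Prices")
    tdf.Connect = ";DATABASE=C:\Data\xxx" & Format(todayDate, "yyyymmdd") & ".mdb"
    tdf.RefreshLink

    Set tdf = CurrentDb.TableDefs("Prior_Prices")
    tdf.Connect = ";DATABASE=C:\Data\xxx" & Format(priorDate, "yyyymmdd") & ".mdb"
    tdf.RefreshLink
End Sub
```

Any two dates the user picks can be swapped in the same way, which is what makes the arbitrary-interval comparison reports possible.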