A couple of months ago, we posted a blog entry on how to use Access 2010 with SQL Azure.
At that time, the easiest way to migrate your Access data to SQL Azure was to first migrate the tables and data to a non-Azure SQL Server then to use SQL Server tools to migrate these items to a SQL Azure database.
Today, we are pleased to let you know that the SQL Server team has released a new version of the SQL Server Migration Assistant (SSMA) for Access that will allow you to migrate your data and schema directly from Access to SQL Azure, as well as SQL Server 2005, 2008 and 2008 R2. Just like with SQL Server, you can use your Access 2010 application as the front-end to your SQL Azure database.
More details on the release are available on the SQL Azure Team Blog and Zane Adam's Blog, or you can download SSMA for Access directly from the Microsoft Download Center.
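As a rough illustration (not from the original post), using an Access 2010 application as the front-end to a SQL Azure database means creating ODBC linked tables with the SQL Server Native Client driver. The sketch below shows one way to do this in DAO; the server, database, credential, and table names are placeholders, not real values:

```vba
' Hedged sketch: link a SQL Azure table into Access via DAO.
' All server/database/credential names below are placeholders.
Sub LinkAzureTable()
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef
    Dim connect As String

    ' SQL Server Native Client 10.0 ships with SQL Server 2008 R2.
    connect = "ODBC;DRIVER={SQL Server Native Client 10.0};" & _
              "SERVER=yourserver.database.windows.net;" & _
              "DATABASE=YourDb;UID=youruser@yourserver;" & _
              "PWD=yourpassword;Encrypt=Yes;"

    Set db = CurrentDb()
    Set tdf = db.CreateTableDef("Customers")  ' local link name
    tdf.Connect = connect
    tdf.SourceTableName = "dbo.Customers"     ' table on the server
    db.TableDefs.Append tdf
End Sub
```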
Thanks Russell, this new migration tool will be very helpful. Most of the time we use SQL Server for data storage from the start, but being able to migrate directly from an Access data store to SQL Azure will fit in well with agile development scenarios.
Russell: 1) Will this in any way pave the path for a new Access connection to SQL Server?
2) Will this type of connection be faster than the current linked tables to SQL Server? I think Access developers are long overdue for a very fast, native connection to SQL Server, especially with today's new technology; we are still using the same connection we were using in 2000. Edwin
Edwin, I'm just another Access dev like you, but I think the answer to both of your questions is 'no'. Access connecting to SQL Azure is just ODBC, AFAIK, if I understood your questions properly. You're completely right that without a new and robust interface to SQL Server, Access is a bit like a car with only three wheels. The Access team has worked hard to get SharePoint and Access to work well together, and that's really good news. But SQL Server, in the cloud or not, is the most important and most natural data store for serious application development with Access. It'd be good to hear something from the Access team about moving forward in this area.
Edwin/Michael, FWIW: even though Access may use ODBC to connect to SQL Server, AIUI, SQL Server implements ODBC natively (i.e. not just as a wrapper atop its native library). There may be a difference in functionality between, say, the ADO.NET provider and the ODBC provider, but the performance should be very similar because both go directly to the source.

Now, I also have to agree 100% that we really need a good replacement for ADO. DAO is a great layer and works very well with ACE, but I've quite liked ADO's flexibility, which is largely present in ADO.NET as well. It would be very nice to see ADO.NET providers supported in Access. Though one can do great things with ADO inside Access, I've found it to have too many quirks, and I wouldn't really even want ADO.NET if it likewise suffered from similar quirks and gotchas. But right now, ADO is the only way to get, for example, disconnected recordsets and asynchronous processing, which are not supported by DAO. So yes, we do need something to replace the functionality that ADO brought to the table for Access.
First of all, I need to say I LOVE Access 2010. 2007 was good, but 2010 is better than ever; it looks great and it works great. Thanks, Access team!

That said, I would like to get Russell's opinion on this. I have an Access front-end to a SQL Server back-end (www.easypayroll.net); we have different modules, Payroll, Human Resources and Contract Billing, all in Access. We also have other modules, Punch Manager, Punch Reader, Time and Attendance, and Scheduling, built in .NET because of the solution's needs. I have to say, one Access guy can be as fast as 5 to 10 VB.NET guys; Access is easier to deploy and maintain, and is much better integrated. BUT with links to SQL Server it gets really slow in large loops and iterations. We really need a new Access-to-SQL connection. We do not need to modify tables from the Access side; we have SQL Server Management Studio for those tasks. What we really need is faster reads and writes, disconnected recordsets, and faster queries and loops. I think a native connection to SQL Server is long overdue. Edwin
@Edwin wrote:
"What we really need is faster read and write"
====
C#, FoxPro, Access: it makes no difference to SQL Server here, and the network speed is going to be the same in all cases. At the end of the day, if you pull 10 records off the server using ODBC, OLE DB or ADO.NET, it's all the same. There is no magical silver bullet here that speeds things up; the read and write speed will NOT change. For any type of record updating that processes more than one record, you need to use a stored procedure on the server side, and once again SQL Server doesn't care whether it's FoxPro, Access, or VB.NET making that call; the speed is going to be the same no matter what client you use (native or ODBC).

I think most of us would like to see some improvements in how we link to SQL Server: having one connection string for all tables, or at least some type of connection management area where we can add and manage our connections (much like a DSN, but managed inside Access). We would then point the linked tables to the connection object. This would make relinking really easy (only one object's string to change), and this type of setup would also better support hidden or encrypted passwords for SQL Server.

We can have disconnected recordsets now with ADO; however, I would like to see that ability in DAO. There was a demo at the last PDC conference of Access doing just that with SQL Azure, much like the new disconnected recordset model we have when linking to SharePoint. The offline mode for Access and SharePoint linked tables is really very nice now.

However, I might be missing what you are asking for. You might want to expand a little on the typical kind of code you're talking about that would suddenly, magically speed up Access compared to those other technologies. In fact, I am not aware of such a scenario.
There might be some potential batch-update scenarios you can come up with, but assuming we're not talking about disconnected recordsets, you would have to provide some specific examples and demonstrate to me that Access performed much slower in real practice than some other technology on the same network connection doing the same job. I have not seen a performance difference in real-world practice. A native connection will not magically fix a poorly designed application that pulls too much data over the network, and the speed at which that data is dished out by SQL Server does not change when using ODBC versus some other system. In most cases virtually ALL of the competing technologies will use about the same bandwidth and perform record updates at the same speed. There is nothing wrong at all with the underlying ODBC technology we have in terms of how well it performs.

Albert D. Kallal (Access MVP)
Edmonton, Alberta Canada
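To make the "one connection string for all tables" idea above concrete: this can be approximated today in VBA by iterating the TableDefs collection and relinking every ODBC table against a single string. A minimal, hedged sketch (the procedure name is mine, not from the comment):

```vba
' Hedged sketch: re-point every ODBC linked table at a single
' connection string, approximating a central "connection object".
Sub RelinkAll(ByVal newConnect As String)
    Dim db As DAO.Database
    Dim tdf As DAO.TableDef

    Set db = CurrentDb()
    For Each tdf In db.TableDefs
        ' Only touch ODBC links, not local tables or other link types.
        If Left$(tdf.Connect, 5) = "ODBC;" Then
            tdf.Connect = newConnect
            tdf.RefreshLink     ' re-establish the link with the new string
        End If
    Next tdf
End Sub
```

With this in place, pointing the whole front-end at a different server is one call with one new string.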
Thanks all for the responses. Edwin, Michael is correct when he says that we use ODBC to connect to SQL Azure. You will still need to use linked tables over ODBC with the new SQL Server Native Client (SNAC) 10.0 driver available with SQL Server 2008 R2. SQL Azure does not support traditional ADO connections.

I'm interested in understanding what you mean when you say we need a "native" connection to SQL Server. The team has heard this many times from Access developers, but it's unclear what is not considered native about the SQL ODBC drivers. The issues you encounter when iterating through large datasets on SQL Server may be mitigated by working with smaller datasets when connected to remote data. This is a common problem when working with data located elsewhere on a LAN, WAN or the internet. One potential solution is to reduce the amount of data you are working with in order to speed up the operation. Other solutions, such as running action queries or using pass-through queries to execute stored procedures on SQL Server, might also help. Can you describe your specific scenario?
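The pass-through suggestion above can be sketched in DAO: a temporary QueryDef with a Connect string sends its SQL straight to the server, so a multi-row update runs server-side instead of looping over a linked table. The DSN and stored procedure names below are hypothetical placeholders:

```vba
' Hedged sketch: execute a server-side stored procedure through a
' pass-through query, avoiding a row-by-row loop in the client.
Sub RunServerSideUpdate()
    Dim db As DAO.Database
    Dim qdf As DAO.QueryDef

    Set db = CurrentDb()
    Set qdf = db.CreateQueryDef("")           ' "" = temporary QueryDef
    qdf.Connect = "ODBC;DSN=MyAzureDsn;"      ' placeholder DSN
    qdf.SQL = "EXEC dbo.UpdatePayrollTotals"  ' hypothetical procedure
    qdf.ReturnsRecords = False                ' action query, no result set
    qdf.Execute dbFailOnError
End Sub
```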
Access is an outstanding product, and its power is not properly appreciated: it is by far the best product out there for fast database app development, and it does a brilliant job of seamlessly using heterogeneous data sources (Jet, Excel, SQL Server). However, I'd add a few points on improvements:
1. There is no Entity Framework provider for Access/Jet.
2. There is no native ADO.NET provider for Access/Jet (it goes through COM/OLE DB). Presumably this is the result of limited resources, since SQL Server or SQL Server Express is strongly encouraged for the back-end anyway, but it is an issue for those who still have to work in a mixed Access/SQL environment (including those of us migrating, who need new code to refer to the old Access data source until the entire code switch is complete). I've also noticed the newest tools either don't support Access Jet/ACE files or are late to, compared to SQL Server, or run into configuration issues or bugs, e.g. linked server support for Access database files: social.msdn.microsoft.com/.../0056bb2d-cb34-4706-bfc2-67df2ce43167

As far as Access as a front-end, the speed over ODBC to SQL Server is good. I wonder if there could still be improvement in non-networked data access (retrieving SQL Server or SQL Server Express data stored locally on the client PC); here it seems that Access is much faster at retrieving data from large LOCAL Jet/ACE tables than from large LOCAL SQL Server tables. Thanks.
If I said 'native' (I can't speak for Edwin), I might mean two things. #1, the kind of intimate integration between Access and SQL Server that ADPs have/had. #2, a better way to connect than the current ODBC. It kind of does the job, but compared to what ADO was starting to offer... and it's pretty bad at messaging about connection status and so forth. It is technology built for client/server over the LAN, not across the internet.
Russell, you asked a great question. Thinking about it some, I am starting to wonder if the perceived difference in speed could be attributable to additional control over how we retrieve data. For example, if we used an asynchronous fetch to open a recordset in an Access form, it would certainly *look* faster and more responsive than if it were synchronous. We have that capability right now with ADO, but it typically means two things: 1) additional VBA coding; there is no way to do this via the UI in the same way we can choose between "Snapshot" and "Dynaset" in the form's property sheet. 2) Coping with the gotchas and quirks of ADO; though ADO brings a lot to the table, there are a few things that have to be worked with or around. An example is the inability to use ADO for a master-detail form with a subform in *DB files; the subform would have to be managed manually.

So I suspect that if Access were enhanced to support asynchronous fetching/execution, as well as providing additional controls/events for handling multiple threads and reacting to the events (as is available with an ADO recordset), it could be put to good use in making the application appear responsive and fast. We are already seeing this with web databases: create a form with a subform and a combobox, and the subform and combobox will load separately while the user is free to interact with the rest of the form, such as a textbox, without having to wait for them to load. That's not the case with a traditional Access database, where a complex form can easily frustrate the user, who is forced to wait for everything to load. Another great feature that ADO brings to the table is the ability to use what would otherwise be non-updatable SQL statements, or even stored procedures, as a recordsource for a form.
In ADO.NET, all SQL statements are basically non-updatable, but the developer can control how modifications are made via the InsertCommand, UpdateCommand and DeleteCommand. With Access, we are bound to updatability rules based on Access's ability to guess the correct SQL to send back to the back-end, and thus all pass-through queries are non-updatable. We are then left between a rock and a hard place: a form (with subforms) bound to only one table, or an unbound form where we do everything ourselves. The ability to control the SQL that Access sends back in response to the user's modifications would allow a more granular approach, done via some property associated with the form or recordset, without having to wade chest-deep in VBA (as is presently the case when using ADO recordsets).

Another point is how Access has handled broken network connections. A drop in the network connection, even a brief one, usually causes the dreaded "Disk or Network Error", which is untrappable and unrecoverable. Not a pretty situation to be in! If I've understood the technologies correctly, this is not an ODBC issue per se but rather Access's, due to how Access seems to insist on having an active connection at all times, whereas most modern applications operate in a disconnected fashion by default. If Access were to operate in a disconnected fashion, this would be much easier to cope with and recover from. We're already seeing a vestige of this with Access Services; let's see it in traditional Access, too.

So, all in all, when I see people talking about "native" connections and needing to speed things up, I suspect they are comparing the performance and functionality to what is observed in SSMS/EM or a full-blown front-end developed in WinForms or similar.
In both cases, the network speed and performance across the technologies are identical; the difference is only in what is *perceived* (and they say perception is reality, after all), because fetches and updates are allowed to process asynchronously without blocking the user. I hope this helps answer your question.
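For reference, the asynchronous ADO fetch described above looks roughly like this in VBA (against regular SQL Server, since the post notes SQL Azure does not support traditional ADO connections). The server and table names are placeholders; note that `WithEvents` requires a class or form module:

```vba
' Hedged sketch: open an ADO recordset asynchronously so the UI
' stays responsive while rows are fetched in the background.
' Must live in a class or form module because of WithEvents.
Dim WithEvents rs As ADODB.Recordset

Sub OpenAsync()
    Dim cn As New ADODB.Connection
    cn.Open "Provider=SQLOLEDB;Data Source=myserver;" & _
            "Initial Catalog=MyDb;Integrated Security=SSPI;"  ' placeholder
    Set rs = New ADODB.Recordset
    rs.Open "SELECT * FROM dbo.BigTable", cn, _
            adOpenStatic, adLockReadOnly, adAsyncFetch
    ' Control returns here immediately; rows arrive in the background.
End Sub

' Event fires when the background fetch finishes.
Private Sub rs_FetchComplete(ByVal pError As ADODB.Error, _
        adStatus As ADODB.EventStatusEnum, _
        ByVal pRecordset As ADODB.Recordset)
    Debug.Print "Fetch complete: " & pRecordset.RecordCount & " rows"
End Sub
```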