This article is part of a series of posts from Xpert360 Development Team to address real world scenarios and problems faced when designing, creating and implementing Visual Studio LightSwitch applications.
Freedom from the chains of legacy databases
Connecting to disparate data sources is one of the key strengths of Visual Studio LightSwitch. With so much data and so many data sources to consume, the possibilities seem endless.
LightSwitch’s attach data source wizard has built-in connectivity to four main provider types: Database, SharePoint, OData Service, and WCF RIA Service.
It does come with a few real and perceived limitations around how data can be implemented and consumed, but are these limitations just self-imposed? Perhaps they are not as real as they seem.
Big Database, Big Data, Big Problems
By definition, if a database exists then it is a legacy database. Yes, even that shiny new one you created yesterday. Databases come in all shapes and sizes, and they comply with established design patterns to varying degrees (or not at all!).
Some real and perceived difficulties:
- My database is complex and has hundreds of tables
- The database schema design does not conform to standards
- My tables have hundreds of columns
- Performance in LightSwitch design and runtime is awful
- It is no use because I am not allowed to change the database
- I only want a few bits of data
- LightSwitch will not import our database metadata
You can create your own WCF RIA data service to address some of these concerns but there is another approach that you can try without the need to write any code.
Design custom logical schemas for use by LightSwitch
You can use this technique to great effect with surprisingly fast results, true to the principles of LightSwitch RAD. It does not answer every concern, poorly designed databases in particular, but it is a technique well worth considering and adding to your LightSwitch tool bag.
You (or a colleague) will need some basic T-SQL skills to use this technique successfully, so you can rule yourself in or out. If you are out, then it really is time to pick up some basic T-SQL.
If you shape your data source and its data in a way that is more LightSwitch friendly, you will have a better experience consuming it within your LightSwitch applications.
To cut a long story short, let us look at an example.
Here is a simple but all-too-common scenario:
- Legacy CRM database
- 200+ tables
- 10,000+ columns
- We just need a list of contacts with a subset of the columns
Conceptually we need to map the data across to our data view for LightSwitch.
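As a sketch of that mapping, the logical schema can simply be a dedicated SQL Server schema containing a view that exposes only the columns we need, under LightSwitch-friendly names. The legacy table and column names below (`dbo.tblContact` and its columns) are illustrative assumptions, not the actual CRM schema:

```sql
-- Create a dedicated schema to hold the LightSwitch-facing logical model.
CREATE SCHEMA LS;
GO

-- Expose only the handful of contact columns we need, renamed to
-- friendly names. The legacy names here are hypothetical.
CREATE VIEW LS.Contact
AS
SELECT  c.ContactID     AS Id,
        c.FirstName     AS FirstName,
        c.LastName      AS LastName,
        c.EmailAddress1 AS Email,
        c.Telephone1    AS Phone
FROM    dbo.tblContact AS c;
GO
```

You then point the attach data source wizard at a connection whose user sees only the `LS` schema, and LightSwitch imports the view as a tidy entity. Note that LightSwitch has to infer a key for a view, so you may need to confirm or adjust the inferred key in the entity designer.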
The whole process of creating the view and building a LightSwitch HTML client application to consume it took less than five minutes. We did some T-SQL script generation and made a few simple name changes, but we did not write a single line of code in LightSwitch!
It works! Previously, unsupported data types in the legacy database caused LightSwitch to fail to import the metadata at all.
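The view layer is also where those unsupported types can be converted or simply omitted. As a sketch, assuming the legacy table has `xml` and `hierarchyid` columns (the table and column names are again hypothetical), the view can cast them to types LightSwitch understands:

```sql
-- Convert columns whose types LightSwitch cannot import; anything
-- not worth converting is simply left out of the view.
-- Table and column names below are illustrative.
CREATE VIEW LS.ContactNote
AS
SELECT  n.NoteID                         AS Id,
        n.ContactID                      AS ContactId,
        CAST(n.NoteXml AS nvarchar(max)) AS NoteText,  -- xml -> string
        n.TreePath.ToString()            AS TreePath   -- hierarchyid -> string
FROM    dbo.tblContactNote AS n;
GO
```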
It performs well at design time because LightSwitch no longer has to load the big, nasty database schema.
It performs well at runtime because we transfer only the subset of contact-list data that we actually need.