Your .NET and Microsoft technologies specialist in Western Switzerland
# Friday, 14 November 2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Today, we can identify seven major trends in the software development process. First, search is becoming a lot more important: searching for files and e-mails and finding a way to organize information is now crucial, and the same applies when we write code. Then, a new user experience is coming, using new paradigms in the way development tools are presented (rich user interfaces). In terms of agility, there is a need for IntelliSense and Quick Info. More and more, development is done in a declarative way that lets the development tool do all the plumbing for us. The three remaining trends are support for legacy code, the Cloud, which influences the next steps taken to adapt Visual Studio, and concurrency.
During her first demo, Karen Liu showed us the new functionality of QuickSearch, which now works across languages (C#, VB.NET, etc.) for types. It also offers a "search as I type" feature and can be used to search for a file.
With Visual Studio 2010, when a symbol is selected, all references to it are highlighted, but only those that have the same signature or type. The user interface, written in WPF, can easily be extended, and Karen showed us how all references to a symbol present in a file can be displayed in the margin of Visual Studio.
Adding unit tests is simpler and handled directly by the user interface, with automatic generation of the test classes. IntelliSense now has a "filter as you type" feature that speeds up code writing.
Arriving on a running project can be difficult, even more so when a lot of code is present. Call dependencies are a new feature that lets the developer see what the selected code is calling, and which parts of the code are calling it.
Another great feature is the historical debugger. Imagine that the runtime hits a breakpoint: it is now possible to go back in the code and execute the program step by step. In other words, it is a kind of replay. The functions and diagnostic events view shows which events occurred and which exceptions were raised, whether caught or not. It is also now possible to record the execution of a program in order to send the recording to someone else so they can reproduce the scenario.
This session was the last of a great TechEd. Not a lot of things were announced, but the content was interesting and, for some sessions, technically advanced.
Next year, TechEd will take place in Berlin between the 2nd and the 6th of November 2009 at the Messe Berlin (Germany). Hope to see you there.

Friday, 14 November 2008 22:48:57 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Udi Dahan explains that today a lot of books on patterns and practices exist on the market. But reading them and knowing them by heart does not help if we don't design the application with flexibility in mind. He takes the examples of the Visitor and the Strategy patterns, which can easily be overused by architects, leading to a collapse of the application structure. The goal here is not to have absolute flexibility, but to have flexibility where it makes sense and where it is needed. The same phenomenon occurred with the use of hooks.
So, Mr. Dahan tells us that we should make the roles explicit by implementing them with interfaces. Before, when we had an entity that needed to be validated, we implemented a .Validate() method on the entity. That made sense, because only that entity knew how to validate itself. But what happens if another entity is linked to it that, in turn, needs to be validated? It could be fine if the call sequence were always the same; if it is not, the trouble begins. The goal, then, is to identify the roles and express them as interfaces. So, in the case of validation, an interface generic over the entity type should be created, and a specific entity validator implementing this interface should be written as well, so that a Service Locator can return the validator that will be called for validation.
It is also possible to use that same pattern to differentiate the roles that can be applied to the same entity. Mr. Dahan uses the same pattern to implement message handlers.
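A minimal sketch of this explicit-role idea, with names of my own choosing (IValidator<T>, CustomerValidator) rather than anything shown in the session, could look like this:

```csharp
using System;

// The role "can validate an entity of type T", made explicit as an interface.
public interface IValidator<T>
{
    void Validate(T entity);
}

public class Customer
{
    public string Email { get; set; }
}

// One class per role/entity combination, instead of a Validate() method
// buried inside the entity itself.
public class CustomerValidator : IValidator<Customer>
{
    public void Validate(Customer customer)
    {
        if (string.IsNullOrEmpty(customer.Email))
            throw new ArgumentException("A customer must have an e-mail address.");
    }
}

public static class ValidationExample
{
    public static void Run(Customer customer)
    {
        // In the pattern described above, a service locator or IoC container
        // would hand the validator back; it is instantiated directly here only
        // to keep the sketch self-contained.
        IValidator<Customer> validator = new CustomerValidator();
        validator.Validate(customer);
    }
}
```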

Friday, 14 November 2008 22:47:18 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

The Metropolis story has been running for a while now, and its goal is to draw a parallel between cities, transportation, manufacturing and applications.
Pat Helland compares IT shops with cities, manufactured goods with structured data and operations, retail with business processes, and factories and buildings with applications. He starts by explaining that buildings initially contained people and stuff, but evolved into a model where bringing stuff and people in and out, in addition to connecting them, became more and more important. The same goes for applications: traditionally they were built to interface with people, to contain data and to perform operations. This is changing in the sense that they integrate the personal view of work more. Moreover, the dissemination of knowledge increases, and the tendency to directly perform cooperative operations increases as well.
Pat distinguishes between three kinds of buildings and applications: High Road, Low Road and No Road.
On the building side, we can recognize a High Road building by its ability to evolve little by little, gaining character. Such buildings receive investment, and new extensions or wings are added to them. It is the same with High Road applications, which are typically line-of-business applications requiring very high availability. We can also call them packaged applications.
Low Road buildings have a lower cost, but they have no style and a high turnover; most often they are inexpensive constructions. On the other hand, they are highly adaptable. For applications, we can compare this model to applications built by end users, without the need for the IT department. And if the application is no longer useful, it can be thrown away. Typically, Excel spreadsheets, Access databases or even SharePoint are considered Low Road applications by Pat Helland.
Then the shearing layers for buildings are presented: Stuff, Space Plan, Building Infrastructure, Skin, Structure and Site. Each of these layers has its own lifetime, from 60 years for the Structure to 5 years for the Space Plan. Again, the same parallel can be drawn for applications.
The goal, and the conclusion of the presentation, is to leverage middleware and to build applications for reuse, in order to reduce their costs. This can be done by asking two questions: who makes money, and who saves money?
In terms of application component reuse, there is not yet a marketplace today. On the middleware side, the vendors make money and the users save money, on the condition that they work on a SOA middleware. Service reuse is non-existent today. Moreover, there is a need to standardize on schemas, contracts and branding. Finally, applications are evolving to become services, and they should be designed for change.

Friday, 14 November 2008 22:45:07 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Basically, S+S is about externalizing services, just as we have done with electrical power. Instead of every home producing its own power, we now have plants producing it and homes connecting to the grid to get what they need. It is more or less the same with the Cloud: someone hosts the resources for you, so there is no need to care about scalability, failover and so on, letting you concentrate on the development of your application. It also allows you to publish your own services into the Cloud, making them available to others. A parallel can be drawn with transport systems:
A car corresponds to the on-premises infrastructure, but it has a maintenance cost and needs to be fixed or patched.
Renting a car is better and is like hosting.
The train, for its part, is equivalent to the Cloud: no need to care about maintenance at all, but the downside is that you cannot change the schedule or where it goes.
So, when you go to the Cloud, you are looking for availability and scaling, but you have no control over it. It also means that the Cloud is not a silver bullet and is not suited for everything.
To manage identity, .NET Services (one of the Cloud services) relies on external authorities. The enterprise defines the identities and roles and builds a trust relationship with an external authority that will be trusted by the Cloud. It also means that .NET Services needs to support several technologies.
This leads to at least two challenges: focusing on the use of SOA, and on the decentralization of resources.
To support identity management, the Cloud uses building blocks such as claims, tokens and Security Token Services.
Finally, there is an MMC console to manage the identities on .NET Services.

Friday, 14 November 2008 22:43:05 (GMT Standard Time, UTC+00:00)
TechEd2008
# Thursday, 13 November 2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Mike Flasco starts by running an application that downloads the slides from the cloud...
The goal of the presentation is to explain what the Data Services proposed by Microsoft in the Cloud are. With Data Services, not only the entities are stored, but also the relationships, as links. Moreover, paging and direct querying are supported out of the box.
The HTTP verb controls the operation performed on the data: GET for SELECT, POST for INSERT and DELETE for... DELETE.
This means that every resource is accessible through a URI.
To set up a model in Data Services, you need to:
1.- Create the data model in Visual Studio, generating an .edmx file
2.- Create a .svc service inheriting from DataService<DataModel>
3.- Set the access rights
4.- Call the service. That's it!
Entity exposure can be set per entity (read-only, read/write, all).
In order to support row-level security, it is possible to implement interceptors that are executed before the GET, PUT, etc.
When issuing a LINQ query, under the covers a URI request is sent to the Data Services, and the same thing happens when we use the object model.
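As a hedged sketch of steps 2 and 3 above and of such an interceptor (the NorthwindEntities model and the Order.Owner property are hypothetical placeholders for whatever the .edmx file generates):

```csharp
using System;
using System.Data.Services;
using System.Linq.Expressions;
using System.Web;

// Step 2: the .svc service inherits from DataService<T>, T being the
// entity model generated from the .edmx file (hypothetical here).
public class NorthwindService : DataService<NorthwindEntities>
{
    // Step 3: define which entity sets are exposed and with which rights.
    public static void InitializeService(IDataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Customers", EntitySetRights.AllRead);
        config.SetEntitySetAccessRule("Orders", EntitySetRights.All);
    }

    // Interceptor executed before any GET on Orders, e.g. for row-level security.
    [QueryInterceptor("Orders")]
    public Expression<Func<Order, bool>> FilterOrders()
    {
        string user = HttpContext.Current.User.Identity.Name;
        return order => order.Owner == user;
    }
}
```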

Thursday, 13 November 2008 20:50:05 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Search Server 2008 is a new brand for SharePoint Server 2007 for Search.
Among other things, it has a simplified install, an enhanced administration interface, new federation capabilities, some performance and stability enhancements and does not have document limits.
Search Server 2008 comes in two versions: Search Server 2008, which allows a distributed topology and is licensed per server, and Search Server 2008 Express, which is limited to a single-server installation and is free. In terms of features, there is no difference at all.
Michal Gideoni explains when we need to customize the search. Basically, every time the user interface needs to be modified, customization can be put in place, and she starts with a demo on how to create a new search tab.
To customize the user interface, most modifications can be done by editing the XSLT that generates the HTML, but it is important to know that the Search web parts are sealed, so they cannot be inherited from.
The search supports two types of queries: keyword-based or SQL statements, but both of them use the same flow.
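As a small illustration of the keyword flavour, assuming the MOSS 2007 query object model (Microsoft.Office.Server.Search.Query), a query can also be issued from code:

```csharp
using Microsoft.Office.Server.Search.Query;
using Microsoft.SharePoint;

public static class SearchExample
{
    public static ResultTable FindContracts(SPSite site)
    {
        // Keyword syntax: free-text terms and property filters, no SELECT statement.
        KeywordQuery query = new KeywordQuery(site);
        query.QueryText = "contract author:\"Smith\"";
        query.ResultTypes = ResultType.RelevantResults;

        ResultTableCollection results = query.Execute();
        return results[ResultType.RelevantResults];
    }
}
```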
The Search Query web service is useful for remote applications, is also accessible from the Office Research pane, and allows both query methods.
Federation is a way to get search results from other search engines: the query is sent to them, the result is received and then transformed to format the output. The Federation web part can only be connected to a single federated search location defined in MOSS and supports synchronous or asynchronous rendering.
Currently, the Federation web part supports two types of locations: OpenSearch 1.0/1.1, which must return XML results or even simpler ATOM or RSS/RDF feeds, and the local search index. In case OpenSearch is not available or the result is not in XML, it is possible to implement a custom connector that wraps the result and generates the XML needed by the web part (a minimal sketch follows the list). To do that, you need to:
1.- Create an aspx page that takes a parameterized URL
2.- Connect to the federated search engine and generate an RSS feed
3.- In the Federation web part, connect to that aspx page.
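A minimal sketch of such a connector page, where QueryBackend is a placeholder for whatever call reaches the remote engine, could be the code-behind of that aspx page:

```csharp
using System;
using System.Web;
using System.Web.UI;

public partial class FederatedConnector : Page
{
    protected void Page_Load(object sender, EventArgs e)
    {
        // The Federation web part calls this page with the search terms in the URL.
        string terms = Request.QueryString["q"] ?? string.Empty;

        Response.Clear();
        Response.ContentType = "text/xml";
        Response.Write("<?xml version=\"1.0\" encoding=\"utf-8\"?>");
        Response.Write("<rss version=\"2.0\"><channel><title>Custom connector</title>");

        foreach (string hit in QueryBackend(terms))
        {
            Response.Write("<item><title>" + HttpUtility.HtmlEncode(hit) + "</title></item>");
        }

        Response.Write("</channel></rss>");
        Response.End();
    }

    // Stub standing in for the call to the remote (non-OpenSearch) engine.
    private string[] QueryBackend(string terms)
    {
        return new string[0];
    }
}
```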

Thursday, 13 November 2008 20:48:54 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Today, applications mainly face three challenges: work is distributed, coordinating code leads to complexity, and, finally, managing and tracking these applications is not trivial.
The goal of a WF program is to coordinate work, but the problem with such applications is that they need a hosting process to run. According to Jon Flanders, the hosting code takes most of the development time. Moreover, this hosting code is not the most interesting one to write.
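To give an idea of that plumbing, here is a minimal sketch of the self-hosting boilerplate as it looks with WF 3.5 (MyWorkflow is just a placeholder workflow definition):

```csharp
using System;
using System.Threading;
using System.Workflow.Activities;
using System.Workflow.Runtime;

// Placeholder workflow definition, empty for the sake of the sketch.
public class MyWorkflow : SequentialWorkflowActivity { }

class Program
{
    static void Main()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            AutoResetEvent done = new AutoResetEvent(false);

            // Wiring completion and termination is part of the plumbing
            // every host has to repeat.
            runtime.WorkflowCompleted += (s, e) => done.Set();
            runtime.WorkflowTerminated += (s, e) =>
            {
                Console.WriteLine(e.Exception.Message);
                done.Set();
            };

            WorkflowInstance instance = runtime.CreateWorkflow(typeof(MyWorkflow));
            instance.Start();

            done.WaitOne();
        }
    }
}
```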
The aim of Dublin is to be the host process of choice for WF applications, and this platform comes with two main tools: a Visual Studio designer written in WPF, and an improved version of the Visual Studio debugger. Whereas the .NET Framework 3.5 offers two execution models, state machine and sequential, WF 4.0 comes with a third one: flowchart. This execution model allows workflows to be designed with branches and loops that are more complex than what sequential workflows allow.
Another problem with 3.5 is that parallel activities did not run on separate threads, meaning that only pseudo-parallelism was in place. WF 4.0 introduces true parallelism, and the runtime is able to coordinate multiple activities at the same time.
A side comment from Jon: the code activity should not be used, because it hides the purpose of the workflow, whereas the goal of a workflow should be understandable by looking at the design itself.
WF 4.0 introduces compensation through a dedicated activity (the compensable activity) and also two types of correlation: context-based or content-based (like BizTalk). The former only works in request-response scenarios; with the latter, one-way communication can also be used.
Finally, Dublin supports tracking and configuration through dedicated console applications.

Thursday, 13 November 2008 20:47:48 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

This general session was about how to reduce cost and carbon emissions by sharing infrastructure. Pat Helland also showed how the new Microsoft data center will be built in Chicago, using the new concept of containers for the servers. This data center will contain 100,000 servers packed into 50 separate containers.


Thursday, 13 November 2008 20:46:28 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Ishai Sagi starts by explaining what a field type is.
A custom field is needed when custom validation, custom controls, access to a data source or new properties are required, for example to create a phone number field.
The downside is that a custom field does not inherit everything from the parent field. Moreover, it does not integrate with Office; in other words, it is not displayed in the Office fields bar. In the datasheet view, the custom controls are read-only, and they are not included when you export a list to Excel. Finally, a complex data structure may be hard to integrate into filters, grouping, search and querying.
Every custom field is composed of three components: an XML definition file, a class library and a field class.
The three most common overrides are: DefaultValue, to dynamically assign a default value based, for example, on the user context; GetValidatedString, which returns the value that will be stored in SharePoint; and finally FieldRenderingControl, for a custom user interface.
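As a minimal sketch of such a field (assuming a text-based phone field deriving from SPFieldText; the validation rule itself is mine, not from the session), the field class could look like this:

```csharp
using System;
using System.Text.RegularExpressions;
using Microsoft.SharePoint;

public class PhoneField : SPFieldText
{
    // Both constructors are required by the SharePoint runtime.
    public PhoneField(SPFieldCollection fields, string fieldName)
        : base(fields, fieldName) { }

    public PhoneField(SPFieldCollection fields, string typeName, string displayName)
        : base(fields, typeName, displayName) { }

    // Returns the value that will actually be stored in SharePoint.
    public override string GetValidatedString(object value)
    {
        string text = (value ?? string.Empty).ToString();
        if (!Regex.IsMatch(text, @"^\+?[0-9 ()-]{7,20}$"))
            throw new SPException("Please enter a valid phone number.");
        return text;
    }

    // DefaultValue and FieldRenderingControl could be overridden in the same
    // way for a context-sensitive default value and a custom edit control.
}
```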
After two demos, the first showing how to create a simple custom field and the second showing a data-validation custom field, Ishai advises that when developing a custom field we should always keep in mind that it could be used as a site column.
The rendering pattern allows you to define how a custom field is rendered. It does not require any compiled code and is instead defined in an XML definition file.

Thursday, 13 November 2008 20:43:46 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Andrew Connell, MOSS MVP, starts by presenting some of the problems we can encounter when deploying a MOSS publishing site. First, when browsing such a site, IE displays a warning message saying that the "Name" ActiveX control must be trusted and installed on the client machine in order to see the web site, which is really not ideal. For that, there is a workaround explained in Microsoft KB article 931509.
The second problem is how to minimize the page payload, due mainly to the table layout used by SharePoint. A first recommendation is to use a CSS-based design; it is even more efficient because the CSS files are cached. Then, enabling HTTP compression reduces the amount of data transferred by 71%. Finally, on a standard SharePoint page, Andrew showed us that 26% of the payload is used to transfer the core.js file (54 KB), which is only needed for the Site Actions menu. The branding and content consume roughly 45% of the payload, while the other scripts and CSS files take 29%. The problem is that core.js is not needed for anonymous internet-facing visitors, but if it is simply suppressed, we are in a scenario not supported by Microsoft. So, rather than letting core.js load at the top of the master page, it is better to load it asynchronously. To do that, it is necessary to implement the OnInit event in a custom control so that, when the access is anonymous, core.js is not emitted at the top of the page; then, at the bottom of the master page, an IFrame is placed that loads the core.js file.
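A rough sketch of that custom control follows; the exact type/key pair SharePoint checks before emitting core.js is an assumption from memory, not something shown in the session:

```csharp
using System;
using System.Web.UI;

// Dropped into the master page. For anonymous visitors it registers an empty
// script block under the key assumed to be checked for core.js, so the 54 KB
// reference is not emitted in the <head>. An IFrame at the bottom of the
// master page then points to a blank page that does reference core.js,
// putting it in the browser cache without blocking rendering.
public class DelayCoreJsControl : Control
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);

        if (!Page.Request.IsAuthenticated)
        {
            Page.ClientScript.RegisterClientScriptBlock(
                typeof(DelayCoreJsControl), "core.js", string.Empty);
        }
    }
}
```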
One of the biggest issues with custom control development is that such controls don't work with the WSS_Minimal security policy. So there are two choices: increase the trust level to WSS_Medium or even Full, or increase the trust of the custom components through CAS policy.
Moreover, /pages/forms/allitems.aspx is, by default, accessible to all users, which is not really good for publishing sites. The problem comes from the fact that access is granted by the "Limited Access" permission level. To solve this problem, the ViewFormPagesLockdown feature, which removes this right, needs to be activated, or reactivated if needed.
The page output cache is not activated by default. Enabling it allows the generated HTML to be stored in RAM, and specific cache profiles can be created and applied to specific sites.
The object cache allows objects coming from the database to be stored in RAM. The amount of memory is configurable (100 MB by default). To check the effectiveness of the object cache, it is possible to use the ASP.NET 2.0 Cache Hit Ratio performance counter: if it is below 90%, increase the configured memory.
The disk-based cache is used to save round trips to the database for content, with the consequence that the content is stored on the WFEs' disks. It is configured in the web.config file, under the <BlobCache> element.
On the topology side, it is strongly recommended to use separate environments for authoring and for production, and to use the Web Content Management content deployment features to migrate content from one environment to the other. In order to optimize the environments, set the authoring environment to read/write while setting production to read-only.

Thursday, 13 November 2008 01:02:26 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Roy Osherove is also one of my favorite speakers... and, once again, while waiting for the start of his presentation, he played guitar (Nirvana, for example), which is good for a wake-up session :-)
He starts by surveying the current unit testing market. One of his conclusions is that even though NUnit has the biggest market share, there are better frameworks out there, such as Selenium, Watir, WatiN, Pex and others. The problem is not the frameworks, but how to simulate or fake external systems (mocking), or how to do thread testing. For example, Chess, which will be released with Visual Studio 2010, will allow thread testing (race conditions, deadlocks, etc.).
Project White, for its part, is a user interface testing framework, but testing a user interface in any case requires people dedicated full time to such tasks. Needless to say, test maintenance is also a problem: if a method signature changes, the corresponding tests must also be changed.
Another obstacle to unit testing adoption is the learning curve of the frameworks. It is already hard to sell unit testing, and if people need a lot of time to write their first unit tests, that does not help.
On the IoC and container side, frameworks are emerging, such as Ninject, Castle Windsor and Unity. The trend is also to use IoC to wrap legacy code.
On the design side, the trends are: Domain-Driven Design, design for testability, and Test-Driven Design, in opposition to "classic" OO encapsulation or BDUF (Big Design Up Front).
For isolation frameworks, Stubs, Typemock and Rhino Mocks are rising, while over-specified mocks, NMock and EasyMock are fading. Remember, mocks are used to fake a system or an interface, but most mocking frameworks nevertheless force a specific design.
With this number of frameworks, the ideal would be a merge between the testing frameworks and the isolation frameworks, but that is going to be difficult: unit testing frameworks have no faking capabilities, while mocking frameworks have no test-running capabilities.
Finally, Roy explains that the main reason why Test-Driven Development fails is push-back from the community: how do you explain that many lines of test code need to be written before writing the code itself?

Thursday, 13 November 2008 00:59:45 (GMT Standard Time, UTC+00:00)
TechEd2008
# Wednesday, 12 November 2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

This interactive session held by Andrew Connell was mostly about SharePoint development. There are two possible ways to develop with SharePoint: customization, which is about changing columns and content types, but also about modifying pages in SharePoint Designer, and development, which uses features along with code. The problem is how to reconcile the two: some content lives in the content database, other content lives in source control, and, unfortunately, it is difficult to move the modifications from one environment to the other. What needs to be known is that as long as a file is not customized, it is served from the file system, from the templates. What Andrew proposes is to do development only. Of course, doing this can be tedious, especially when dealing with features, because there is no designer and there is a lot of CAML to write. Moreover, provisioning files requires double development. On the other hand, the developers stay in Visual Studio, it is easy to package the changes, and the existing source control is fully leveraged.
To make the developer's job easier, there are a couple of tricks. First, it is possible to add IntelliSense to Visual Studio when writing CAML, via the Visual Studio XML schema cache. Then, when developing content types and site columns, do it using the browser and SharePoint Designer, and finally extract the assets using PowerShell and custom STSADM commands in order to "featurize" everything. Of course, the WSP building process should be automated.

 

Wednesday, 12 November 2008 22:35:45 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

This presentation, held by Bjorn Erik Olsrod from FAST, starts by explaining that the FAST ESP web part, which is able to interact with the FAST search engine, is available for free on CodePlex, and continues with a first demo showing how to use it.
How this works is pretty simple: the browser sends the request to SharePoint, which queries FAST ESP. In return, the engine sends the result back to the FAST ESP web part in XML. This XML is finally transformed to HTML using an XSLT and displayed to the user. If users want to change the way the results are displayed, they can modify the XSLT; Bjorn shows how to do so by displaying thumbnails of the documents. He also integrates a Silverlight control showing the image documents.
In some situations, some logic might be necessary to display the result. The problem with the XSLT transformation, since it runs on the server, is that it cannot know the client context. To solve this, the XSLT is modified to transform the XML received from FAST ESP into XML encapsulated inside the HTML; the final HTML displayed to the user is then produced by JavaScript. The demo shown at this point was able to adapt the amount of information displayed to the user based on the size of the browser window.
An even more complex scenario is to implement "search as you type". To do that, it is possible to implement a page stored in a SharePoint document library that acts as a web service: it receives the AJAX calls from the browser and sends the results back to it.

Wednesday, 12 November 2008 22:34:06 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>
OpenXML is an ISO Standard.
It offers the key benefit of not needing the Office applications to generate Office documents. Before, the main problem with the Office applications' object model was that as soon as a dialog popped up, the application was stuck. Moreover, someone was needed to restart the server every X hours. In other words, it was not stable and was not designed for server scenarios. An important point regarding security is that if a .docx file contains macros, it will not be opened by Word. Now, with OpenXML, generation is a lot faster, it works on a client as well as on a server, and there is no need for Office. Eric White showed that generating a Word document with the old technique took 1 second per document, while generating a hundred documents with OpenXML took just a few seconds.
OpenXML is LINQ-friendly, allowing the objects to be queried with LINQ, but OpenXML is not a replacement for the Office applications' object model: today there is no layout or calculation support and no file conversion.
The SDK is based on .NET 3.5 and uses System.IO.Packaging. Moreover, it comes with different tools:
1.- OpenXMLDiff, to compare two XML documents
2.- Class Explorer, to find the relations between XML markup and classes
3.- Document Reflector, which generates OpenXML code based on an existing document.
In his last demo, Eric shows us how to generate a Word document and save it directly in a SharePoint document library.
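As a minimal sketch of that kind of generation, assuming the strongly typed classes of the OpenXML SDK (still a CTP at the time), creating a small document without Office looks like this:

```csharp
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;

class Program
{
    static void Main()
    {
        // No Word automation: the .docx package is written directly to disk.
        using (WordprocessingDocument doc = WordprocessingDocument.Create(
            "hello.docx", WordprocessingDocumentType.Document))
        {
            MainDocumentPart main = doc.AddMainDocumentPart();
            main.Document = new Document(
                new Body(
                    new Paragraph(
                        new Run(
                            new Text("Generated without Office")))));
            main.Document.Save();
        }
    }
}
```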

Wednesday, 12 November 2008 22:29:49 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

In this presentation, Aaron Skonnard starts by distinguishing between an ESB and an Internet Service Bus. The former provides a messaging fabric, a service registry, naming and access control across the enterprise's departments, and allows interoperability and connectivity between applications.
An Internet Service Bus does the same thing, but in the Cloud.
During his first demo, Aaron starts a console application that registers and publishes into the Cloud a service running on his own laptop. It was then possible for the attendees to access a feed representing the service hosted on the laptop.
There are, however, several challenges: IPv4 first, with the IP address shortage it entails. Another challenge is that machines are behind firewalls and use NAT. And, last but not least, there are a lot of bad guys out there. All of these challenges make bidirectional connectivity really difficult.
Some solutions exist, such as dynamic DNS, UPnP, or even opening ports in firewalls. This last option is never well accepted by IT professionals, and with good reason.
Basically, we can see the Service Bus as an enabler to bring the Cloud into the enterprise (integration).
Then, Aaron focuses on three services offered by the Service Bus: naming, registry and the messaging fabric.
On the naming side, a solution name is linked to a customer and a set of services. It offers hierarchical naming, which makes it possible to browse to a particular service. Basically, we have addresses like scheme://servicebus.windows.net/solution/name/name, but this could even become scheme://solution.servicebus.windows.net/name/name with, maybe later, the possibility to extend the URI on both ends.
The registry is a layer over the naming system. It offers programmatic access for discovery and for publishing into the Cloud; when a service is shut down, its endpoint disappears from the registry. It is possible to access the registry using a simple web browser, since the registry is exposed as nested ATOM feeds.
The messaging fabric uses the WCF programming model and provides a family of bindings that correspond to the standard WCF bindings.
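A hedged sketch of what that means in practice: the service and host below are plain WCF, and (as an assumption based on the early .NET Services SDK) only the binding and the address would change to go through the relay.

```csharp
using System;
using System.ServiceModel;

[ServiceContract]
public interface IEcho
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEcho
{
    public string Echo(string text) { return text; }
}

class Host
{
    static void Main()
    {
        ServiceHost host = new ServiceHost(typeof(EchoService));

        // Local endpoint. With the .NET Services SDK, the conceptual change is
        // limited to swapping this binding for a relay binding (for example
        // NetTcpRelayBinding) and the address for an sb:// cloud address under
        // the solution's name, as described above.
        host.AddServiceEndpoint(typeof(IEcho), new NetTcpBinding(),
            "net.tcp://localhost:9000/echo");

        host.Open();
        Console.WriteLine("Service listening, press Enter to stop.");
        Console.ReadLine();
        host.Close();
    }
}
```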

Wednesday, 12 November 2008 00:57:19 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Hadi Hariri started by explaining that the ASP.NET MVC framework is based on the routing mechanism, which is part of the ASP.NET framework itself, meaning that it is available for WebForms development as well. The MVC-specific piece is that the ASP.NET MVC framework relies on the MvcHttpHandler class. Basically, routes are declared in the Global.asax file, ordered from the most restrictive route to the most generic one, just as we do for exception catching: if the most generic route is declared first, it will be the only one ever used. Routes can use constraints, based on regular expressions or even on custom classes implementing the Match method. During the first demo, Hadi showed us how to define routes using constraints and how to debug such routes. One of his pieces of advice is to always test routing; indeed, most issues come from wrong route declarations.
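A minimal sketch of such route registration in Global.asax, roughly as it looked in the MVC beta (the route names and the regular-expression constraint are my own examples):

```csharp
using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : System.Web.HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        // Most specific route first: the {year} segment is constrained
        // to four digits with a regular expression.
        routes.MapRoute(
            "ArchiveByYear",
            "archive/{year}",
            new { controller = "Posts", action = "Year" },
            new { year = @"\d{4}" });

        // Generic catch-all route declared last, like the most general
        // catch block in exception handling.
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" });
    }

    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);
    }
}
```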
On the controllers side, the MvcHttpHandler instantiates a controller factory, which in turn instantiates the right controller using reflection. Reflection can be a performance killer, but in this case the controller types are stored in the ControllerTypeCache, avoiding the use of reflection every time. Following this explanation, a demo using the Unity IoC container is shown.
Action invocation finds the right method, binds the parameters and executes the action. A filter pipeline can be used: IAuthorizationFilter runs first, then IActionFilter, which is in turn followed by IResultFilter. For its part, HttpAuth delegates the authentication to another class.
As for the view engine, its only role is to locate a view; rendering the result is not its responsibility.
When using the ASP.NET MVC framework, standard ASP.NET user controls can be used, but only in a read-only fashion.

Wednesday, 12 November 2008 00:56:22 (GMT Standard Time, UTC+00:00)
TechEd2008
# Tuesday, 11 November 2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

The development of the MVC pattern for ASP.NET comes from the observation of several problems: first, the viewstate, which drastically increases the payload of a page, and second, the difficulty of testing a user interface when the business logic is tightly coupled with it.
That is why, among other drivers, the MVC pattern has been developed on top of the existing ASP.NET framework.
Basically, three roles take part in the pattern: the controller, which is only responsible for collecting the user's input; the model, which represents the underlying data and implements the business logic; and the view, whose only responsibility is to render the user interface.
This means that we are moving from a stateful web, using WebForms, towards a stateless model.
The MVC framework has the advantage of being an alternative to WebForms, of being testable and also extensible: its components can be replaced by your own custom ones.
In Visual Studio, when an MVC web application is created, it automatically asks whether a unit testing project should be created, and it is also possible to select the testing framework. In the project folder structure, folders are automatically created to store the views, the models and the controllers separately.
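As a small sketch of why this separation makes testing easier (the controller, action and test names are mine, and NUnit is only an assumption for the test framework):

```csharp
using System.Web.Mvc;
using NUnit.Framework;

public class ProductsController : Controller
{
    // The controller only collects input and selects a view;
    // rendering is left entirely to the view.
    public ActionResult Details(int id)
    {
        if (id <= 0)
            return RedirectToAction("Index");

        ViewData["ProductId"] = id;
        return View("Details");
    }
}

[TestFixture]
public class ProductsControllerTests
{
    [Test]
    public void Details_WithValidId_RendersDetailsView()
    {
        // No web server or page lifecycle needed: the action is a plain method call.
        ProductsController controller = new ProductsController();

        ViewResult result = (ViewResult)controller.Details(42);

        Assert.AreEqual("Details", result.ViewName);
        Assert.AreEqual(42, result.ViewData["ProductId"]);
    }
}
```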

Tuesday, 11 November 2008 23:06:10 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

In this interactive session, Stef Shoffren tried to explain to us how to develop, deploy and debug a SharePoint timer job.
SharePoint allows three different scenarios for timer jobs: batch data loading, scheduled tasks, and one-off jobs executed across the farm, such as an IIS restart or a configuration change.
What should not use this kind of job is, typically, sending e-mails to users, which should be handled by the SharePoint notification service, unless a company policy disallows it.
First, a timer job is implemented by inheriting from the SPJobDefinition class and overriding the Execute method. It runs under the system account, which gives it the possibility to execute tasks on all the servers of the farm. The problem is that IT professionals don't look kindly on timer jobs and see them as a threat because of the privileges given to the system account.
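A minimal sketch of such a job class (the job name and schedule are mine), with its registration shown as a comment:

```csharp
using System;
using Microsoft.SharePoint.Administration;

public class NightlyCleanupJob : SPJobDefinition
{
    // Parameterless constructor required by the timer service for deserialization.
    public NightlyCleanupJob() : base() { }

    public NightlyCleanupJob(string name, SPWebApplication webApplication)
        : base(name, webApplication, null, SPJobLockType.Job) { }

    public override void Execute(Guid targetInstanceId)
    {
        // Runs under the system account, hence the IT concerns mentioned above.
        // The actual work of the job goes here.
    }
}

// Registration, typically done from a feature receiver (deployment option 1 below):
//
//   NightlyCleanupJob job = new NightlyCleanupJob("nightly-cleanup", webApplication);
//   SPDailySchedule schedule = new SPDailySchedule();
//   schedule.BeginHour = 2;
//   schedule.EndHour = 3;
//   job.Schedule = schedule;
//   job.Update();
```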
To store the configuration, there are basically three ways:
1.- A property bag populated when defining the timer job
2.- Settings in the OWSTimer.exe.config file
3.- An external store such as a SQL database or a SharePoint list, which is the preferred way.
On the logging side, we can distinguish three options:
1.- Using ULS, the out-of-the-box SharePoint logging system. According to the audience, it is a real pain to set up
2.- The Windows event log
3.- Enterprise Library Logging
Whichever option is chosen, logging must be part of the design of the timer job.
To test and debug a timer job, it is necessary to attach the debugger to the OWSTimer process, which requires admin rights.
In order to deploy a timer job, there are three means:
1.- Using a feature and a feature receiver
2.- Using an MSI (Windows Installer) package
3.- Using a custom executable that must be run from the central administration server.

Note to myself: look for WSPBuilder and WSSDW on CodePlex to load data into SharePoint.

Tuesday, 11 November 2008 23:04:21 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

As usual, David Chappell gave us a great presentation, since he is able to popularize a complex topic such as workflows and the technologies around them. He also knows how to take advantage of the space at his disposal, which makes him a great presenter.
David starts by explaining that workflows, services and models are simply the abstractions behind Workflow Foundation, Dublin and Oslo.
Basically, Dublin is an extension to the Windows Server infrastructure to run and manage WCF applications, especially the ones that use WF. This means that WCF and WF can be used independently or together.
On the other side, Oslo focuses on modeling only.
But what is WF?
First, WF is not easy at all to use.
On the positive side, it is useful for scalable and long-running applications, such as applications that call services or that depend on user input.
A workflow must support parallel activities, meaning that otherwise a multi-threaded application would have to be written.
On the downside, there is no standard host process to run WF applications.
The next generation of WF aims to make the development of WF applications easier.
To achieve this goal, a new designer with more activities and better runtime performance will be released. Along with that, a new workflow type, the flowchart, will be available to developers. Currently, only two workflow types are available: sequential, which is considered too simple, and the state machine, which is, for its part, too complicated.
Dublin will be the hosting process par excellence for WCF applications using WF. It will offer a persistence service to store the service state, management tools, auto-start capabilities that start a service without waiting for a first message, a restart mechanism for failed services, message forwarding based on content-based routing and, finally, tracking features.
So, then, what is the difference between BizTalk and Dublin?
While Dublin is focused on WCF applications containing business logic, BizTalk, for its part, is focused on EAI and B2B applications, exposing existing applications via services; BizTalk is more about integrating applications. On the other hand, Dublin will be part of the Windows Server infrastructure, making it a "free" product, as opposed to BizTalk, which is a paid product.
What are models?

They are descriptive, sometimes executable, and can be linked together.
So, Oslo is a general-purpose modeling platform and is composed of a SQL Server repository to store schemas and instances, a modeling language called "M" and a modeling tool called Visual Studio "Quadrant".
This platform can be used to model the environment, or the set of hardware or machines on which the application can be deployed.
M is, in turn, composed of two languages: MSchema, which describes schemas, contracts and messages and generates T-SQL statements, and MGrammar, used to define textual DSLs. Oslo also offers tools for creating parsers for MGrammar-defined DSLs. Moreover, MSchema is itself defined with MGrammar.
As for VS "Quadrant", it will be an application in which no code is developed, only models. It will be based on the same user interface model as Office 2007, using a contextual ribbon that depends on the current model view.
The schema repository, since it will be stored in a SQL Server database, will be accessible by any tool able to interface with SQL Server.
Finally, WF 4.0 will ship with .NET 4 and Visual Studio 2010, and Dublin will first be available as a separate web download.

Tuesday, 11 November 2008 23:02:52 (GMT Standard Time, UTC+00:00)
TechEd2008
# Monday, 10 November 2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>
Basically, Dublin is an extension to IIS/WAS to monitor and manage workflows and WCF services on top of the .NET framework. It will be available in the next version of Windows Server as a new role.
It covers the hosting, persistence, monitoring and messaging around those services. For example, on the hosting side, a timer service and a discovery service will be available. Management will be exposed through an API accessible, for example, from PowerShell.
It will be possible, using a "Persist" activity, to survive a service outage: the "Persist" activity is responsible for persisting the workflow while the target service is down, and the workflow resumes when that service comes back online.
Some demos about routing (versioning), reliability and monitoring were shown during this session.
This session was just an overview of "Dublin", but the question, in my opinion, is now how to decide between a pure WF implementation, Dublin and BizTalk to implement a sequence of activities... A couple of other sessions are scheduled during this week, so I hope to get an answer in one of them.

Monday, 10 November 2008 22:43:55 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>
The F# CTP for Visual Studio 2008 was released in September 2008.
This presentation was about the fundamentals of F#, which is a functional language. Microsoft's vision for this language is not to replace one of the mainstream .NET languages such as C# or VB.NET, but rather to position it as a supporting language and a productivity tool.
Luke Hoban, through a complete demo, demonstrated the basics of the language, such as "let", "rec" to declare recursive functions, the pipeline operator "|>" and even the parallel execution of functions. He also demonstrated how to expose F# code as a .NET class that can be called from C# or VB.NET code.
The example chosen was the processing of financial data downloaded from the Yahoo! website and its display in tabular or graphical form, using the graphics tools from FlyingFrog.

Monday, 10 November 2008 22:43:02 (GMT Standard Time, UTC+00:00)
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>
While waiting for the keynote, the VJs Loomis & Jones performed their show of visual effects and music with quite some success. This was a good way to keep us entertained until the keynote started.
The keynote started with a short speech by Pierre Lautaud, Microsoft Western Europe VP, who introduced the very first speaker of the TechEd: Jason Zander, General Manager of the Visual Studio team at Microsoft.
He started by saying that while the focus at PDC was on the Azure and cloud computing announcements, TechEd would focus on Visual Studio and its languages. At the same time, Jason and Microsoft announced Visual Studio 2010. The next version of VS is based on four pillars:
Understanding what the code is doing
Testing
Office business applications
C++ empowerment
Regarding the first pillar, Jason argues that today teams move very fast: they have to produce more code in less time (budget) while members are leaving and joining. Microsoft released VS 2008 to help us, but 2010 will give us even more possibilities to achieve these targets. As an example, with VS 2010 it is possible to extract the dependency diagram of the assemblies or a sequence diagram in UML 2.1.1. These diagrams, by means of add-ins, can be embedded in the source code editor window in VS. During this first demo, he showed us that VS 2010 is now written using WPF.
By selecting a part of the code, it is possible to see the history of that code (who modified what and when).
While writing code during one of his demos, he showed us that code snippets have been improved: it is enough to type "table" in an .aspx page to get the full HTML code for the table to appear. Amazing!
Oh, and, by the way... the line-continuation underscore in Visual Basic can be omitted! Isn't that nice?

The problem with testing is that the testers say "it does not work" while, on the other side, the developer says "it works on my machine, you're wrong". The issue is the reproducibility of bugs. Microsoft is working on a new application code-named "Camano", which is more or less a testing center. It is then possible for the testers to follow a scenario, checking whether the tests succeed and, when a bug is encountered, submitting it to Team Foundation Server. Along with the bug, the stack trace of the current situation, the machine configuration and also a video in WMV format are posted in TFS, allowing the developer to reproduce the problem and to see the tester's manipulations. Great!
With Lab Management, through TFS, it is now possible to define virtual machine templates that can be deployed and used by the testers.

Jason also demonstrated that a new server explorer has been added to VS: the SharePoint explorer, with deep feature support, such as WSP packages or event handlers. What I am wondering here is whether this is not the end of SharePoint Designer: why keep that application when all of its features will be in VS 2010 (WYSIWYG editing, lists, document libraries, etc.)?

On the packaging side, it will be possible to define transformations for the configuration files, such as web.config. This will allow the developer to avoid having a tracing flag active on the production servers.

On the C++ side, some new features were announced, such as the MFC ribbon and the multicore extensions.

Finally, multi-touch support is implemented, coming from the Microsoft Surface developments.

Monday, 10 November 2008 22:41:00 (GMT Standard Time, UTC+00:00)
TechEd2008
# Sunday, 09 November 2008
Day-1 for TechEd, a little walk in Barcelona...
Sunday, 09 November 2008 21:49:11 (GMT Standard Time, UTC+00:00)
TechEd2008