# Wednesday, November 12, 2008

<Disclaimer>These are personal notes of what I retained during the session. They may be incomplete, partially right or wrong. They are just the part of my notes that caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

This interactive session, held by Andrew Connell, was mostly about SharePoint development. There are two possible ways to work with SharePoint: customization, which is about changing columns and content types but also modifying pages in SharePoint Designer, and development, using features along with code. The problem is how to reconcile the two. Some content is in the content database, other content is in source control and, unfortunately, it is difficult to move modifications from one environment to the other. What needs to be known is that as long as a file is not customized, it is served from the file system, from the templates. What Andrew proposes is to do only development. Of course, doing this can be tedious, especially when dealing with features, because there is no designer and a lot of CAML to write. Moreover, provisioning files requires double development. On the other hand, the developers stay in Visual Studio, it is easy to package the changes and the existing source control is fully leveraged.
To make the developer's job easier, there are a couple of good tips. First, it is possible to add IntelliSense for CAML to Visual Studio via the Visual Studio XML Schema Cache. Then, when developing content types and site columns, do it using the browser and SharePoint Designer, and finally extract the assets using PowerShell and custom STSADM commands in order to "featurize" everything. Of course, the WSP building process should be automated.
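
As an illustration of that "extract and featurize" step, here is a minimal sketch of a custom STSADM command that dumps a content type's CAML schema so it can be pasted into a feature's elements file. The command name, parameters and the registration XML (not shown) are my own illustrative assumptions, not the exact commands used in the session.

```csharp
using System.Collections.Specialized;
using Microsoft.SharePoint;
using Microsoft.SharePoint.StsAdmin;

// Custom STSADM command that outputs a content type's schema XML.
// Command and parameter names are illustrative.
public class ExportContentTypeCommand : ISPStsadmCommand
{
    public string GetHelpMessage(string command)
    {
        return "-o exportcontenttype -url <site url> -name <content type name>";
    }

    public int Run(string command, StringDictionary keyValues, out string output)
    {
        using (SPSite site = new SPSite(keyValues["url"]))
        using (SPWeb web = site.RootWeb)
        {
            SPContentType contentType = web.ContentTypes[keyValues["name"]];
            output = contentType.SchemaXml; // CAML ready to be "featurized"
            return 0; // success
        }
    }
}
```
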

 

Wednesday, November 12, 2008 10:35:45 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They may be incomplete, partially right or wrong. They are just the part of my notes that caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

This presentation, held by Bjorn Erik Olsrod from FAST, started by explaining that the FAST ESP Web Part, able to interact with the FAST search engine, is available for free on CodePlex, and continued with a first demo showing how to use it.
How this works is pretty simple. The browser sends the request to SharePoint, which queries the FAST ESP. In return, the engine sends the result back to the FAST ESP Web Part in XML. This XML is finally transformed to HTML using an XSLT and displayed to the user. If the user wants to change the way the results are displayed, s/he can modify the XSLT; Bjorn showed how to do it by displaying thumbnails of the documents. He also integrated a Silverlight control showing the image documents.
In some situations, some logic might be necessary to display the result. The problem with the XSLT transformation, as it runs on the server, is that it cannot know the client context. To solve this problem, the XSLT is modified to transform the XML received from the FAST ESP into an XML island embedded in the HTML. The final HTML displayed to the user is then produced by JavaScript on the client. The demo shown at this point was able to adapt the amount of information displayed to the user based on the size of the browser window.
An even more complex scenario is implementing "search as you type". To do that, it is possible to implement a page stored in a SharePoint document library that acts as a web service. This service receives the AJAX calls from the browser and sends the results back.
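
A rough, simplified sketch of the server-side piece is shown below, written here as a plain HTTP handler rather than the page-in-a-document-library used in the demo; the call into FAST ESP is only a placeholder stub.

```csharp
using System.Web;

// Simplified illustration of a "search as you type" endpoint: it receives
// the AJAX call and returns suggestions. The actual demo used a page stored
// in a SharePoint document library; the FAST ESP query is a placeholder.
public class SearchSuggestHandler : IHttpHandler
{
    public bool IsReusable
    {
        get { return true; }
    }

    public void ProcessRequest(HttpContext context)
    {
        string term = context.Request.QueryString["q"] ?? string.Empty;

        // Placeholder: in the real scenario this would query FAST ESP
        // and shape the XML result into a lightweight response.
        string[] suggestions = QueryFastEsp(term);

        context.Response.ContentType = "text/plain";
        context.Response.Write(string.Join("\n", suggestions));
    }

    private static string[] QueryFastEsp(string term)
    {
        // Hypothetical stub standing in for the FAST ESP query call.
        return new[] { term + " report", term + " presentation" };
    }
}
```
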

Wednesday, November 12, 2008 10:34:06 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They may be incomplete, partially right or wrong. They are just the part of my notes that caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>
OpenXML is an ISO Standard.
It offers the key benefit of not needing the Office applications to work and to generate Office documents. Before, the main problem with the Office applications object model was that as soon as a dialog popped up, the application was stuck. Moreover, someone was needed to restart the server every X hours. In other words, it was not stable and was not designed for a server scenario. An important point regarding security is that if a .docx file contains macros, it will not be opened by Word. Now, with OpenXML, it is a lot faster, it works on a client as well as on a server, and there is no need for Office. Eric White showed that generating a Word document with the old technique took one second per document, while generating a hundred documents with OpenXML took just a few seconds.
OpenXML is LINQ friendly, allowing the objects to be queried with LINQ, but OpenXML is not a replacement for the Office applications object model. Today, there is no layout or calculation support and no file conversion.
The SDK is based on .NET 3.5 and uses System.IO.Packaging. Moreover, it comes with different tools:
1.- OpenXMLDiff, to compare two XML documents
2.- The Class Explorer, to find relations between the XML markup and the classes
3.- The Document Reflector, which generates OpenXML code based on an existing document.
In his last demo, Eric showed us how to generate a Word document and save it directly into a SharePoint document library.
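
As a minimal sketch of what such generation looks like, here is a small example using the Open XML SDK 2.0-style classes (the SDK version available at the time of the session may differ slightly); saving the result into a SharePoint document library, as in the demo, is not shown.

```csharp
using DocumentFormat.OpenXml;
using DocumentFormat.OpenXml.Packaging;
using DocumentFormat.OpenXml.Wordprocessing;

// Generates a .docx on disk without Word installed, using the Open XML SDK
// (which builds on System.IO.Packaging).
class Program
{
    static void Main()
    {
        using (WordprocessingDocument doc = WordprocessingDocument.Create(
            "Report.docx", WordprocessingDocumentType.Document))
        {
            MainDocumentPart mainPart = doc.AddMainDocumentPart();
            mainPart.Document = new Document(
                new Body(
                    new Paragraph(
                        new Run(
                            new Text("Generated with the Open XML SDK")))));
            mainPart.Document.Save();
        }
    }
}
```
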

Wednesday, November 12, 2008 10:29:49 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They may be incomplete, partially right or wrong. They are just the part of my notes that caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

In this presentation, Aaron Skonnard started by explaining the difference between an ESB and an Internet Service Bus. The former provides messaging fabric, service registry, naming and access control across the enterprise departments and allows interoperability and connectivity between the applications.
An Internet Service Bus does the same thing, but in the Cloud.
During his first demo, Aaron started a console application registering and publishing into the Cloud a service running on his own laptop. It was then possible for the attendees to access a feed representing the service hosted on the laptop.
Anyway, there are several challenges: IPv4 first, with the IP address shortage it entails. Another challenge is that machines are behind firewalls and use NAT. And, last but not least, there are a lot of bad guys out there. All these challenges make it really difficult to have bidirectional connectivity.
Some solutions exist, such as Dynamic DNS, UPnP, or even opening ports in firewalls. This last option is never well accepted by IT professionals, with good reason.
Basically, we can see the Service Bus as an enabler to bring the Cloud into the enterprise (integration).
Then, Aaron focused on three services offered by the Service Bus: Naming, Registry and Messaging Fabric.
On the naming side, a solution name is linked to a customer and a set of services. It offers hierarchical naming, which makes it possible to browse to a particular service. Basically, addresses look like scheme://servicebus.windows.net/solution/name/name, but this could even become scheme://solution.servicebus.windows.net/name/name with, maybe afterwards, the possibility to extend the URI on both ends.
The Registry is a layer over the naming system. It offers programmatic access for discovery and publishing into the Cloud; in other words, when a service is shut down, its endpoint disappears from the registry. It is possible to access the registry using a simple web browser. Indeed, the registry is exposed as nested ATOM feeds.
The Messaging Fabric uses the WCF programming model and provides a family of bindings that correspond to the WCF bindings.
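
To give a rough idea of what this looks like in code, here is a minimal sketch of hosting a local WCF service behind the relay. The relay binding name (NetTcpRelayBinding) and the solution name are assumptions based on the .NET Services SDK of the time, and credential configuration against the Access Control service is omitted; this is an illustration, not the demo's code.

```csharp
using System;
using System.ServiceModel;
using Microsoft.ServiceBus; // .NET Services SDK assembly (assumed)

[ServiceContract]
public interface IEchoService
{
    [OperationContract]
    string Echo(string text);
}

public class EchoService : IEchoService
{
    public string Echo(string text) { return text; }
}

class Program
{
    static void Main()
    {
        // The service runs on the local machine but registers an endpoint in
        // the cloud; clients reach it through the relay. Solution name and
        // binding are illustrative assumptions; credential configuration
        // against the Access Control service is omitted.
        Uri address = new Uri("sb://mysolution.servicebus.windows.net/echo");

        ServiceHost host = new ServiceHost(typeof(EchoService));
        host.AddServiceEndpoint(typeof(IEchoService), new NetTcpRelayBinding(), address);
        host.Open();

        Console.WriteLine("Service listening on {0}", address);
        Console.ReadLine();
        host.Close();
    }
}
```
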

Wednesday, November 12, 2008 12:57:19 AM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They may be incomplete, partially right or wrong. They are just the part of my notes that caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Hadi Hariri started by explaining that the ASP.NET MVC framework is based on the routing mechanism that has been part of the ASP.NET framework itself since the beginning, meaning that it is available for WebForms development as well. The only thing is that the ASP.NET MVC framework relies on the MvcHttpHandler class. Basically, routes are declared in the Global.asax file, ordered from the most restrictive route first to the most generic one, just as we do for exception catching: if the most generic route is declared first, it will be the only one ever used. Routes can use constraints, such as regular expressions or even custom classes implementing the Match method. Then, during the first demo, Hadi showed us how to define routes using constraints and how to debug such routes. One piece of his advice is to always test routing; indeed, most of the issues come from wrong route declarations.
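
A minimal sketch of such route registration, with an inline regular-expression constraint and a custom constraint class, might look like the following; the route names and URL patterns are illustrative, not taken from the demo.

```csharp
using System;
using System.Web;
using System.Web.Mvc;
using System.Web.Routing;

public class MvcApplication : HttpApplication
{
    public static void RegisterRoutes(RouteCollection routes)
    {
        routes.IgnoreRoute("{resource}.axd/{*pathInfo}");

        // Most restrictive route first: a regular-expression constraint
        // forces the id segment to be numeric.
        routes.MapRoute(
            "ProductById",
            "Products/{id}",
            new { controller = "Products", action = "Details" },
            new { id = @"\d+" });

        // Generic route last; declared first, it would swallow every request,
        // just like a catch-all exception handler placed before specific ones.
        routes.MapRoute(
            "Default",
            "{controller}/{action}/{id}",
            new { controller = "Home", action = "Index", id = "" });
    }

    protected void Application_Start()
    {
        RegisterRoutes(RouteTable.Routes);
    }
}

// A constraint can also be a custom class implementing IRouteConstraint.Match.
public class EvenIdConstraint : IRouteConstraint
{
    public bool Match(HttpContextBase httpContext, Route route, string parameterName,
                      RouteValueDictionary values, RouteDirection routeDirection)
    {
        int id;
        return int.TryParse(Convert.ToString(values[parameterName]), out id) && id % 2 == 0;
    }
}
```
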
On the controllers side, the MvcHttpHandler instantiates a ControllerFactory, which in turn instantiates the right controller using reflection. Reflection can be a performance killer, but in this case the controller types are stored in the ControllerTypeCache, avoiding the use of reflection every time. Following this explanation, a demo using the Unity IoC container was shown.
Action invocation finds the right method, binds the parameters and executes the action. A filter pipeline can be used: IAuthorizationFilter runs first, then IActionFilter, which in turn is followed by IResultFilter. On its side, HttpAuth delegates the authentication to another class.
About the view engine, its only role is to look for a view; it is not its responsibility to render the result.
When using the ASP.NET MVC framework, standard ASP.NET user controls can be used, but only in read-only mode.

Wednesday, November 12, 2008 12:56:22 AM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
TechEd2008
# Tuesday, November 11, 2008

<Disclaimer>These are personal notes of what I retained during the session. They may be incomplete, partially right or wrong. They are just the part of my notes that caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

The MVC pattern development for ASP.NET comes from the observation of several problems: first, the ViewState, which drastically increases the payload of a page, and second, the difficulty of testing a user interface when the business logic is tightly coupled with it.
That is why, among other drivers, the MVC pattern has been developed on top of the existing ASP.NET framework.
Basically, three roles take part in the pattern: the controller, which is only responsible for collecting the user inputs; the model, which represents the underlying data and implements the business logic; and the view, whose only responsibility is rendering the user interface.
This means that we are moving from a stateful web, using WebForms, towards a stateless model.
The MVC pattern has the advantage of being an alternative to WebForms, being testable and also extensible: its components can be replaced by your own custom ones.
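
As a minimal illustration of these three roles, a controller could look like the sketch below; ProductRepository and Product are hypothetical stand-ins for the model, not classes from the session.

```csharp
using System.Web.Mvc;

// The controller only collects the user input (the id from the route),
// asks the model for data and hands the result to a view for rendering.
public class ProductsController : Controller
{
    private readonly ProductRepository _repository = new ProductRepository();

    public ActionResult Details(int id)
    {
        var product = _repository.GetById(id); // the model owns the business logic
        return View(product);                  // the view only renders the result
    }
}

// Hypothetical model classes, standing in for the real data access layer.
public class Product
{
    public int Id { get; set; }
    public string Name { get; set; }
}

public class ProductRepository
{
    public Product GetById(int id)
    {
        return new Product { Id = id, Name = "Sample product " + id };
    }
}
```
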
In Visual Studio, when an MVC web application is created, it automatically asks whether a unit testing project should be created. It is also possible to select the testing framework. Regarding the project folder structure, folders are automatically created to store the views, the models and the controllers separately.

Tuesday, November 11, 2008 11:06:10 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They may be incomplete, partially right or wrong. They are just the part of my notes that caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

In this interactive session, Stef Shoffren tried to explain to us how to develop, deploy and debug a SharePoint timer job.
SharePoint timer jobs are suited to three different scenarios: batch data loading, scheduled tasks, and one-off jobs executed across the farm, such as an IIS restart or a configuration change.
What should not use this kind of job is, typically, sending e-mails to users, which should be handled by the SharePoint notification service, unless a company policy disallows it.
First, a timer job is implemented by inheriting from the SPJobDefinition class and overriding the Execute method. It runs under the system account, which gives it the possibility to execute tasks on all the servers of the farm. The problem is that IT professionals do not look kindly on timer jobs and see them as a threat because of the privileges given to the system account.
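
A minimal sketch of such a job might look like this; the job name and the work performed are illustrative.

```csharp
using System;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

// Custom timer job: inherit from SPJobDefinition and override Execute.
public class CleanupJob : SPJobDefinition
{
    // Parameterless constructor required by SharePoint for serialization.
    public CleanupJob() : base() { }

    public CleanupJob(SPWebApplication webApplication)
        : base("CleanupJob", webApplication, null, SPJobLockType.Job)
    {
        Title = "Cleanup Job";
    }

    public override void Execute(Guid targetInstanceId)
    {
        // Runs under the system account, hence the concerns mentioned above.
        // Placeholder for the actual batch work.
        foreach (SPSite site in WebApplication.Sites)
        {
            using (site)
            {
                // e.g. purge expired items, load batch data, etc.
            }
        }
    }
}
```
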
To store the configuration, there are basically three ways:
1.- A property bag populated when defining the timer job
2.- Settings in OWSTimer.exe.config
3.- An external store such as a SQL database or a SharePoint list, which is the preferred way.
On the logging side, we can also distinguish three ways:
1.- Using ULS, the out-of-the-box SharePoint logging system; according to the audience, it is a real pain to set up
2.- The Windows event log
3.- Enterprise Library Logging
Whichever of these is chosen, logging must be part of the design of the timer job.
To test and debug a timer job, it is necessary to attach to the OWSTimer process, which requires admin rights.
In order to deploy a timer job, we can see three means:
1.- Using a feature and a feature receiver (see the sketch after this list)
2.- Using an MSI Windows Installer package
3.- Using a custom executable that must be run from the central administration server.
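
For the first option, a feature receiver registering the CleanupJob from the earlier sketch could look roughly like this; it assumes a web-application-scoped feature, and the schedule values are illustrative.

```csharp
using System;
using System.Collections.Generic;
using Microsoft.SharePoint;
using Microsoft.SharePoint.Administration;

// Registers the timer job (and its schedule) when the feature is activated.
public class CleanupJobFeatureReceiver : SPFeatureReceiver
{
    public override void FeatureActivated(SPFeatureReceiverProperties properties)
    {
        // Assumes a web-application-scoped feature.
        SPWebApplication webApp = (SPWebApplication)properties.Feature.Parent;

        // Remove any previous registration of the same job.
        foreach (SPJobDefinition job in new List<SPJobDefinition>(webApp.JobDefinitions))
        {
            if (job.Name == "CleanupJob")
            {
                job.Delete();
            }
        }

        CleanupJob cleanupJob = new CleanupJob(webApp);
        cleanupJob.Schedule = new SPDailySchedule { BeginHour = 2, EndHour = 3 };
        cleanupJob.Update();
    }

    public override void FeatureDeactivating(SPFeatureReceiverProperties properties) { }
    public override void FeatureInstalled(SPFeatureReceiverProperties properties) { }
    public override void FeatureUninstalling(SPFeatureReceiverProperties properties) { }
}
```
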

Note to myself: look for WSPBuilder and WSSDW on CodePlex to load data into SharePoint.

Tuesday, November 11, 2008 11:04:21 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They may be incomplete, partially right or wrong. They are just the part of my notes that caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

As usual, David Chappell gave us a great presentation, as he is able to make a complex topic such as workflows and the technologies around them accessible to everyone. He is also able to take advantage of the space at his disposal, which makes him a great presenter.
David started by explaining that workflows, services and models are just abstractions for Workflow Foundation, Dublin and Oslo.
Basically, Dublin is an extension to the Windows Server infrastructure to run and manage WCF applications, especially the ones that use WF. It means that WCF and WF can be used independently or together.
On the other side, Oslo focuses on modeling only.
But what is WF?
First, WF is not easy at all to use.
On the positive side, it is useful for scalable and long-running applications, such as applications that call services or that depend on user input.
A workflow must also support parallel activities, which without WF would mean writing a multi-threaded application.
On the down side, there is no standard host process to run WF applications.
The next generation of WF aims to make WF application development easier.
To achieve this goal, a new designer with more activities and better runtime performance will be released. Along with that, a new workflow type, the flowchart, will be available to developers. Currently, only two workflow types are available: the sequential workflow, which is considered too simple, and the state machine, which is, for its part, too complicated.
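
To give an idea of the hosting burden mentioned above (no standard host process), here is a minimal, illustrative code-only WF 3.x sketch; it is not code from the session, just the boilerplate an application needs today, which is exactly the part Dublin is meant to take over.

```csharp
using System;
using System.Threading;
using System.Workflow.Activities;
using System.Workflow.Runtime;

// A code-only sequential workflow with a single CodeActivity.
public class HelloWorkflow : SequentialWorkflowActivity
{
    public HelloWorkflow()
    {
        CodeActivity sayHello = new CodeActivity { Name = "sayHello" };
        sayHello.ExecuteCode += OnSayHello;
        Activities.Add(sayHello);
    }

    private void OnSayHello(object sender, EventArgs e)
    {
        Console.WriteLine("Hello from WF");
    }
}

class Program
{
    static void Main()
    {
        // The application itself has to create and manage the WorkflowRuntime.
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            AutoResetEvent done = new AutoResetEvent(false);
            runtime.WorkflowCompleted += (sender, e) => done.Set();
            runtime.WorkflowTerminated += (sender, e) => done.Set();

            WorkflowInstance instance = runtime.CreateWorkflow(typeof(HelloWorkflow));
            instance.Start();
            done.WaitOne();
        }
    }
}
```
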
Dublin will be the default hosting process for WCF applications using WF. It will offer a persistence service to store the service state, management tools, auto-start capabilities allowing a service to start without waiting for a first message, a restart mechanism for failed services, message forwarding based on content-based routing and, finally, tracking features.
So, then, what is the difference between BizTalk and Dublin?
While Dublin is focused on WCF applications containing business logic, BizTalk, on its side, is focused on EAI and B2B applications, exposing applications via services. BizTalk is more for integrating applications. On the other hand, Dublin will be part of the Windows Server infrastructure, making it a "free" product as opposed to BizTalk, which is a paid product.
What are models?

They are descriptive, sometimes executable, and can be linked together.
So, Oslo is a general-purpose modeling platform composed of a SQL Server repository to store schemas and instances, a modeling language called "M" and a modeling tool called Visual Studio "Quadrant".
This platform can be used to model the environment, or a set of hardware or machines on which the application can be deployed.
M is, in turn, composed of two languages: MSchema, to describe schemas, contracts and messages, which generates T-SQL statements, and MGrammar, used to define textual DSLs. Oslo also offers tools for creating parsers for DSLs defined with MGrammar. Moreover, MSchema is itself defined with MGrammar.
On the VS "Quadrant" side, this will be an application in which no code is written, only models. It will be based on the same user interface model as Office 2007, using a contextual ribbon depending on the current model view.
The schema repository, as it is stored in a SQL Server database, will be accessible by any tool able to interface with SQL Server.
Finally, WF 4.0 will be available with .NET 4 and Visual Studio 2010, and Dublin will first be available as a separate web download.

Tuesday, November 11, 2008 11:02:52 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
TechEd2008
# Monday, November 10, 2008

<Disclaimer>These are personal notes of what I retained during the session. They may be incomplete, partially right or wrong. They are just the part of my notes that caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>
Basically, Dublin is an extension to IIS/WAS to monitor and manage workflows and WCF services on top of the .NET Framework. It will be available in the next version of Windows Server as a new role.
It covers the hosting, the persistence, the monitoring and the messaging around those services. For example, on the hosting side, a timer and a discovery service will be available. Management will be available through an API accessible, for example, from PowerShell.
It will be possible, using a "Persist" activity, to survive a service outage, meaning that the "Persist" activity will be responsible for persisting the workflow while the target service is down. The workflow will continue when that service comes back online.
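
The "Persist" activity itself was only previewed; as a rough idea of the underlying mechanism, here is how durable persistence is wired into the WF 3.x runtime by hand today, which Dublin is supposed to provide as a configured host service out of the box. The connection string is a placeholder for the persistence database created with the SDK scripts.

```csharp
using System;
using System.Workflow.Runtime;
using System.Workflow.Runtime.Hosting;

class PersistenceHost
{
    static void Main()
    {
        using (WorkflowRuntime runtime = new WorkflowRuntime())
        {
            // Placeholder connection string to the persistence database.
            string connectionString =
                "Initial Catalog=WorkflowPersistence;Data Source=localhost;Integrated Security=SSPI;";

            // unloadOnIdle = true: idle workflow instances are persisted and
            // unloaded, so they survive a host restart or a service outage.
            runtime.AddService(new SqlWorkflowPersistenceService(
                connectionString,
                true,                        // unload on idle
                TimeSpan.FromMinutes(2),     // instance ownership duration
                TimeSpan.FromSeconds(5)));   // polling interval for expired instances

            runtime.StartRuntime();
            // ... create and start workflow instances as usual ...
            runtime.StopRuntime();
        }
    }
}
```
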
Some demos about routing (versioning), reliability and monitoring were shown during this session.
This session was just an overview of "Dublin", but my open question now is how to choose between a pure WF implementation, Dublin and BizTalk to implement a sequence of activities... A couple of other sessions are scheduled during the week, so I hope to get an answer in one of them.

Monday, November 10, 2008 10:43:55 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They may be incomplete, partially right or wrong. They are just the part of my notes that caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>
F# in its CTP version for VS 2008 was released in September 2008.
This presentation was about the fundamentals of F#, which is a functional language. Microsoft's vision for this language is not to replace one of the mainstream .NET languages such as C# or VB.NET, but rather to have it as a support language or as a productivity tool.
Luke Hoban, through a complete demo, demonstrated the basics of the language, such as "let", "rec" to declare recursive functions, the pipeline operator "|>" and even the parallel execution of functions. He also demonstrated how to expose F# code as a .NET class that can be called from C# or VB.NET code.
The example he took was the processing of financial data downloaded from the Yahoo! website and its display in tabular or chart form, using the graphic tools from FlyingFrog.

Monday, November 10, 2008 10:43:02 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
TechEd2008

<Disclaimer>These are personal notes of what I retained during the session. They may be incomplete, partially right or wrong. They are just the part of my notes that caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>
While waiting for the keynote, the VJs Loomis & Jones performed their show using visual effects and music, with quite some success. This was a good way to keep us busy until the keynote started.
The keynote started with a short speech by Pierre Lautaud, Microsoft Western Europe VP, who introduced the very first speaker of TechEd: Jason Zander, General Manager of the Visual Studio team at Microsoft.
He started by saying that while at the PDC the focus was on the Azure and Cloud computing announcements, TechEd would focus on Visual Studio and the languages. At the same time, Jason and Microsoft announced Visual Studio 2010. The next version of VS is based on four pillars:
Understanding what the code is doing
Testing
Office business application
C++ empowerment
Regarding the first pillar, Jason argues that today teams are moving very fast. They have to produce more code in less time (budget) while their members are leaving and joining. Microsoft released VS 2008 to help us, but 2010 will give us even more possibilities to achieve these targets. As an example, with VS 2010 it is possible to extract the dependency diagram of assemblies or a sequence diagram in UML 2.1.1. These diagrams, by means of add-ins, can be embedded in the source code editor window in VS. During this first demo, he showed us that VS 2010 is now written using WPF.
By selecting a part of the code, it is possible to see the history of that code (who modified what and when).
While writing code during one of his demos, he showed us that code snippets have been improved: it is now sufficient to type "table" in an .aspx page to get the full HTML code for that table. Amazing!
Oh, and, by the way... the line-continuation underscore in Visual Basic can be omitted! Isn't that nice?

The problem with testing is that the testers say "It does not work" while, on the other side, the developers say "It works on my machine, you're wrong". The issue is the reproducibility of bugs. Microsoft is working on a new application with the codename "Camano", which is more or less a testing center. It allows the testers to follow a scenario, check the success of the tests and, when encountering a bug, submit it to Team Foundation Server. Along with the bug, the stack trace of the current situation, the machine configuration, but also a video in WMV format are posted to TFS, allowing the developer to reproduce the problem and also to see the manipulations of the tester. Great!
With Lab Management, through TFS, it is now possible to define virtual machine templates that can be deployed and used by the testers.

Jason also demonstrated that a new server explorer has been added to VS: the SharePoint explorer, with some deep feature support, such as WSPs or event handlers. What I am wondering here is whether this is not the end of SharePoint Designer. Why keep that application when all of its features will be in VS 2010 (WYSIWYG editing, lists, document libraries, etc.)?

On the packaging side, it will be possible to define transformations for the configuration files, such as web.config. This will allow the developer to avoid having a tracing flag left active on the production servers.

On the C++ side, some new features were announced, such as the MFC ribbon and the multicore extensions.

Finally, multi-touch support is implemented, coming from the Microsoft Surface developments.

Monday, November 10, 2008 10:41:00 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
TechEd2008
# Sunday, November 09, 2008
Day-1 for TechEd, a little walk in Barcelona...
Sunday, November 09, 2008 9:49:11 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
TechEd2008
# Monday, September 29, 2008
TechEd is coming soon. It is time to register...
Monday, September 29, 2008 9:01:42 PM (GMT Daylight Time, UTC+01:00)  #    Comments [0] -
English | Technical
# Thursday, September 25, 2008
Tired of processes? Let's give practices a chance. Ivar Jacobson gave a presentation at the Regional Architect Forum in Zurich about the methodology he proposes for software development.
Thursday, September 25, 2008 11:17:41 AM (GMT Daylight Time, UTC+01:00)  #    Comments [0] -
English | Technical
# Friday, July 04, 2008
About the difference between being at the mixing console and being in front of the microphone
Friday, July 04, 2008 12:39:51 PM (GMT Daylight Time, UTC+01:00)  #    Comments [0] -
podcasting
# Saturday, June 28, 2008
If you are interested in function generators and oscilloscopes, I am selling some pieces of the electronic lab I have had for many years.
Saturday, June 28, 2008 5:18:11 PM (GMT Daylight Time, UTC+01:00)  #    Comments [0] -
English | Loisirs