Cloud and Microsoft technologies enthusiast architect in Switzerland
# Wednesday, April 01, 2009

The next version of the framework will be more focused on dynamic languages and will keep C# and VB.NET evolving in parallel.
It also introduces extensions to support parallelism in both a declarative and an imperative way. Among these extensions, we will have PLINQ (Parallel LINQ) to parallelize LINQ queries, the TPL (Task Parallel Library) and the CDS (Coordination Data Structures).
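As a hedged sketch of the first two of these extensions (assuming the .NET 4 APIs as announced; the numbers and names are illustrative), PLINQ parallelizes an ordinary LINQ query with a single AsParallel() call, while the TPL exposes explicit tasks:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class ParallelSketch
{
    static void Main()
    {
        // PLINQ: AsParallel() partitions the source across cores.
        int evenCount = Enumerable.Range(1, 1000)
                                  .AsParallel()
                                  .Count(n => n % 2 == 0);

        // TPL: a Task represents a unit of work handed to the scheduler.
        Task<int> sum = Task.Factory.StartNew(
            () => Enumerable.Range(1, 100).Sum());

        Console.WriteLine(evenCount);   // 500
        Console.WriteLine(sum.Result);  // 5050
    }
}
```

The declarative style (PLINQ) states *what* to parallelize; the imperative style (TPL tasks) states *how* the work is scheduled.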
One of the new features is the ability to install the Client Profile on any configuration, whereas previous versions required a clean machine.
Among other things: new controls for WPF, multi-touch support under Windows 7 and easier development of user interfaces based on data sources, even though the Entity Framework is not yet supported in the presented version. ASP.NET MVC will be implemented with Dynamic Data support.
On the WCF side, some enhancements have been implemented, such as SOAP over UDP and broader WS-* support. Moreover, building a RESTful application will be simplified.
With the .NET framework 4, WF and WCF will work together as they will be completely integrated. Nevertheless, and this is a big change, workflows will be defined in XAML only by default. On the other hand, more activities will be available in the toolbox.
A lot of improvements in terms of performance and scalability have been made, and a new workflow type is now available: Flowchart. Activities can now receive arguments and return values.
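A hedged sketch of an activity that receives an argument and returns a value, assuming the System.Activities API of .NET 4 (the Discount activity and its logic are illustrative, not from the announcement):

```csharp
using System.Activities;

// A custom activity with one input argument and a typed result.
public class Discount : CodeActivity<decimal>
{
    public InArgument<decimal> Amount { get; set; }

    protected override decimal Execute(CodeActivityContext context)
    {
        // Arguments are read through the activity context at run time.
        decimal amount = context.GetValue(Amount);
        return amount * 0.9m;  // apply a 10% discount
    }
}
```

Such an activity should then be runnable with `WorkflowInvoker.Invoke(new Discount { Amount = 100m })`, returning the discounted value.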
When developing custom activities, it will be possible to define a custom design surface using a WPF designer.
In short, the fourth version of .NET allows a workflow (WF) to be exposed through WCF (as an interface), while, on the hosting side, it will rely on Dublin, an application-server extension of Windows Server 2008.

Wednesday, April 01, 2009 12:57:04 PM (GMT Daylight Time, UTC+01:00)  #    Comments [0] -
# Tuesday, March 31, 2009

Once again, I have the pleasure of participating in the TechDays in Geneva at the CICG. This year, not only will I be present at the CTP booth, but we will be one of the Premium Sponsors of the event and we also have a speaker on stage to talk about Velocity. Without saying too much about this new technology coming out really soon, just note that it will be the distributed cache solution from Microsoft. Be there at the session to learn more about it.

Today, during the setup of the booth (by the way, come by to participate in our multi-touch contest and see a multi-touch application running), I had the chance to talk with people from Wygwam and to play with their Microsoft Surface. It is just unbelievable. So much so that you really have to try it to appreciate it... As an example, there was a drum-playing game which was a lot of fun. I want one of them :-) At least, it would be a good way to have guests at home every day :-)

For tomorrow, a bit of Visual Studio 2010 with the .NET framework 4 and ASP.NET, Geneva - the identity management system, not the city - SQL Data Services, Mesh and WF with Dublin. Sounds promising.

Tuesday, March 31, 2009 11:19:55 PM (GMT Daylight Time, UTC+01:00)  #    Comments [0] -
# Thursday, February 12, 2009

I have just come back from UNIL, where I gave a presentation at the Forum Horizon 2009 about the day-to-day work of a computer science engineer. The Forum Horizon is organized by the Office cantonal d'orientation scolaire et professionnelle to present different career possibilities to second-year gymnasium students, from air traffic controller to scientific police.
It is the 6th year in a row that I present this topic, and this year I decided to change the format of the presentation radically. Rather than having a bunch of slides with bullet points, I opted for big background pictures and very few words highlighting the subject. In the end, it was not easy to find the right pictures to illustrate the slides, but I think I was quite successful. The most difficult part is understanding what the audience wants to know. The goal is to stay non-technical and to explain clearly what we do as engineers (and software architects). Moreover, the people in the room don't yet know what they want to do. The idea is to help them make their choice, not necessarily to make them select this particular job.
I posted the presentation here.

Thursday, February 12, 2009 4:04:24 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -

# Thursday, January 15, 2009

Summary:
F# is a new language coming down Microsoft's pipeline for the Visual Studio platform. It aims to tackle the functional programming paradigm, even though it is also possible to use imperative or object-oriented programming.
Robert Pickering starts his book by explaining the basics of F# and how to get and use the tools. Then the book describes the F# syntax as used in the three language paradigms: functional programming, imperative programming and finally object-oriented programming. Among other things, the notion of type inference is presented. Once the syntax has been covered, the book describes how to develop web, Windows or even WPF applications using the .NET framework. Data access is also addressed using the technologies currently available, such as ADO.NET or LINQ. Then a quick look at DSLs, compilation and code generation is given, presenting the lex- and yacc-style tools that come with the language. Finally, a full chapter is dedicated to interoperability between .NET and F#, because even though F# is based on the CLI, the language introduces several types that are not available in the other .NET languages (C# or VB.NET).

Review:
Discovering a new language is really interesting, and with F# it is the occasion to see a new paradigm: functional programming. In short, with F#, everything is a value, even a function. It means that you can pass a function as a parameter to another function. The concept of type inference is also very attractive. The book is very easy to understand and a lot of small examples are explained in detail, making for fast reading. The first half of the book is dedicated to the language itself. The second half is more about using the .NET framework, and I would say it is the less interesting half of the book. Indeed, during the first part you have already come across various examples using types and classes of the framework, user interface development (web or Windows) and data access, so the second part does not bring much new information. Once you know these topics from the .NET documentation or another book, and once you have read how to access the .NET BCL from F#, this part is pretty straightforward and not really useful. Moreover, the examples used there explain more how to use the BCL classes than the language itself. Nevertheless, the last parts, discussing interoperability and the possibility of generating DSLs, are more interesting.
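For readers coming from C#, the "function as a value" idea can be sketched with delegates; this is an illustrative C# analogy, not F# code (the F# equivalent is shown in the comment):

```csharp
using System;

public class FirstClassFunctions
{
    // Twice takes a function as a parameter and applies it two times.
    // In F# this would simply be:  let twice f x = f (f x)
    public static int Twice(Func<int, int> f, int x)
    {
        return f(f(x));
    }

    static void Main()
    {
        // The lambda is a value: it can be stored, passed, and returned.
        Func<int, int> addThree = n => n + 3;
        Console.WriteLine(Twice(addThree, 10)); // 16
    }
}
```

In F#, type inference would also deduce all the types above without a single annotation.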
My final word is that it is a very interesting book if you want to visit another land (functional programming). Unfortunately, on my bookshelf I also have "Expert F#", which I just opened to see what is inside, and I saw that it covers the explanations and descriptions of the language from the beginning too. If I had known that before, maybe I would have bought that one instead. So, if the goal is just to scratch the surface of F#, "Foundations of F#" is the best suited; if the goal is to go really deep into the topic, then prefer "Expert F#" (a review of that one will be posted).

Thursday, January 15, 2009 9:21:33 AM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
Book Review | Programming | Technical
# Friday, January 09, 2009

Yesterday, Microsoft's CEO Steve Ballmer announced the public availability of the first beta release of Windows 7. I had a chance to have a quick look at it, and my first impression was: "it's fast. And?".

Joking aside, if you don't like the Aero-style user interface, be prepared to be a bit overloaded. They have added one more layer of Aero, and the new glassy taskbar simplifies application navigation by replacing multiple application icons with a single one and letting the user see a preview of the running application when hovering the mouse over its icon.

I played a few minutes with it, using Minesweeper as well :-) and I quite like the user interface, the speed, and also the fact that it did not crash during the hour I tested it.

A nice little feature is the ability to show the desktop by just moving the mouse. Microsoft also says that connecting home devices will be much easier than before. Those are only a few of the features that will be in Windows 7. This new version of Windows will probably be released earlier than originally expected, in order to try to make people forget the Windows Vista flop.

Sounds promising...

Friday, January 09, 2009 3:23:25 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -

# Tuesday, January 06, 2009

Back to business from some vacations...
First of all, I would like to wish a happy new year and all the best to the readers.

As is the case every year, people are making resolutions for the new year, and I am afraid I am one of them.
Regarding this blog, I have a couple of points I would like to address this year.

The very first one is to make some cleanup in the blog, such as clearing the spam in the trackbacks and reorganizing the categories.
Then, I would like to upgrade to the latest version of the engine, v2.2, released last October.
One goal I would like to achieve this year is to be more active and to post more regularly. Once a week, for example. This is going to be quite challenging, I know that already, because the goal is to produce content, not just to write for the sake of writing.
Finally, and this has been pending for a few years now (since I left LooKware, in fact), the main web site really needs to be put online. On this side, some work needs to be done to find the right look and feel and also to write the content.

Quite a lot on the plate, and I hope that by the end of 2009, most (if not all) of these points will have been addressed.

Tuesday, January 06, 2009 10:15:00 AM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
Blog Life
# Friday, November 14, 2008

<Disclaimer>These are personal notes of what I retained from the session. They may be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Today, we can identify 7 major trends in the software development process. First, search is becoming a lot more important. Indeed, searching files and emails and finding a way to organize information is now crucial. The same applies when we write code. Then, a new user experience is coming, using new paradigms in the way development tools are presented (rich user interfaces). In terms of agility, there is a need for IntelliSense and Quick Info. More and more, development is done in a declarative way that lets the development tool do all the plumbing for us. The three remaining trends are support for legacy code, the Cloud, which influences the next steps to adapt Visual Studio, and concurrency.
During her first demo, Karen Liu showed us the new functionality of QuickSearch, which now works across languages (C#, VB.NET, etc.) for types. It also offers a "search as I type" feature. It can also be used to search for a file.
With Visual Studio 2010, when selecting a symbol, all references to it are highlighted, but only those with the same signature or type. The user interface, written in WPF, can easily be extended, and Karen showed us how all references to a symbol present in a file can be displayed in the margin of Visual Studio.
Adding unit tests is simpler and handled directly by the user interface, with automatic generation of the classes. IntelliSense now has a "filter as you type" feature that speeds up code writing.
Arriving on a running project can be difficult, even more so when a lot of code is present. Call dependency is a new feature that allows the developer to see what the selected code is calling, and which parts of the code are calling it.
Another great feature is the historical debugger. Imagine that the runtime hits a breakpoint: it is now possible to go back in the code and execute the program step by step. In other words, it is a kind of replay. The Functions and Diagnostics Events view shows which events occurred and which exceptions were raised, whether caught or not. It is also possible to record the execution of a program in order to send the recording to someone else to reproduce the scenario.
This session was the last one of a great TechEd. Not a lot was announced, but the content was interesting and, for some sessions, technically advanced.
Next year, the TechEd will take place in Berlin between the 2nd and the 6th of November 2009 at the Messe Berlin (Germany). Hope to see you there.

Friday, November 14, 2008 10:48:57 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -

<Disclaimer>These are personal notes of what I retained from the session. They may be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Udi Dahan explains that a lot of books on patterns and practices exist on the market today. But reading them and knowing them by heart does not help if we don't design the application with flexibility in mind. He takes the examples of the Visitor and Strategy patterns, which can easily be overused by architects, leading to a collapse of the application structure. The goal is not to have absolute flexibility, but to have flexibility where it makes sense and where it is needed. The same phenomenon occurred with the use of hooks.
So, Mr. Dahan tells us that we should make the roles explicit by implementing them as interfaces. Before, when we had an entity that needed to be validated, we implemented a .Validate() method on the entity. That made sense, because only that entity knew how to validate itself. But what happens if the entity links to another one that, in turn, needs to be validated? It could be fine if the call sequence were always the same; if it is not, trouble comes. So the goal is to identify the roles and express them as interfaces. In the case of a validator, a generic interface parameterized on the entity type should be created, and a specific entity validator implementing this interface should be written, so that a Service Locator can return the validator that will be called for validation.
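A minimal sketch of this explicit-role pattern; the names IValidator&lt;T&gt;, Customer, CustomerValidator and the toy ServiceLocator are illustrative, not from the session:

```csharp
using System;
using System.Collections.Generic;

// The role "can validate an entity of type T", made explicit.
public interface IValidator<T>
{
    bool Validate(T entity);
}

public class Customer
{
    public string Name { get; set; }
}

// One validator per entity type, instead of a .Validate()
// method buried inside the entity itself.
public class CustomerValidator : IValidator<Customer>
{
    public bool Validate(Customer entity)
    {
        return !string.IsNullOrEmpty(entity.Name);
    }
}

// A toy service locator; a real one would be an IoC container.
public static class ServiceLocator
{
    static readonly Dictionary<Type, object> services =
        new Dictionary<Type, object>
        {
            { typeof(IValidator<Customer>), new CustomerValidator() }
        };

    public static T Resolve<T>()
    {
        return (T)services[typeof(T)];
    }
}
```

A caller then writes `ServiceLocator.Resolve<IValidator<Customer>>().Validate(customer)` and never depends on which concrete validator runs, or in which order validators are invoked.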
It is also possible to use this same pattern to differentiate the roles that can be applied to a single entity. Mr. Dahan uses the same pattern to implement message handlers.

Friday, November 14, 2008 10:47:18 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -

<Disclaimer>These are personal notes of what I retained from the session. They may be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

The story of Metropolis has been running for a while now; its goal is to draw a parallel between cities, transportation, manufacturing and applications.
Pat Helland compares IT shops with cities, manufactured goods with structured data and operations, retail with business processes, and factories and buildings with applications. He starts by noting that buildings initially contained people and stuff, but evolved into a model where bringing stuff and people in and out, in addition to connecting them, became more and more important. The same goes for applications: traditionally, they were built to interface with people, contain data and perform operations. This is changing in the sense that they integrate the personal view of work more. Moreover, the dissemination of knowledge increases, and the tendency to directly perform cooperative operations increases as well.
Pat distinguishes three kinds of buildings and applications: High Road, Low Road and No Road.
On the building side, we can recognize a High Road building by its capacity to evolve little by little, gaining character. They receive investment and new extensions or wings over time. It is the same with High Road applications, which are typically line-of-business applications requiring very high availability. We can also call them packaged applications.
Low Road buildings have a lower cost, but they have no style and a high turnover; most often they are inexpensive constructions. On the other hand, they are highly adaptable. For applications, we can compare this model to applications built by end users, without involving the IT department. And if the application is no longer useful, it can be thrown away. Typically, Excel spreadsheets, Access databases or even SharePoint are considered Low Road applications by Pat Helland.
He then presents the shearing layers for buildings: Stuff, Space Plan, Building Infrastructure, Skin, Structure and Site. Each of these layers has its own lifetime, from 60 years for the structure down to 5 years for the space plan. Again, the same parallel can be drawn for applications.
The goal, and the conclusion of the presentation, is to leverage middleware and to build applications for reuse, in order to reduce their costs. This can be done by asking two questions: who makes money? And who saves money?
In terms of application component reuse, there is no marketplace yet today. On the middleware side, the vendors make money and the users save money, provided they work on a SOA middleware. Service reuse is non-existent today. Moreover, there is a need to standardize schemas, contracts and branding. Finally, applications are evolving to become services, and they should be designed for change.

Friday, November 14, 2008 10:45:07 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -

<Disclaimer>These are personal notes of what I retained from the session. They may be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Basically, S+S is about externalizing services, as we did with electrical power. Instead of every home producing its own power, plants produce it and homes connect to the grid to get what they need. It is more or less the same with the Cloud: someone hosts the resources for you, so there is no need to care about scalability, failover and so on, letting you concentrate on the development of your application. It also allows you to publish your own services into the Cloud, making them available to others. A parallel can be drawn with transport systems:
A car corresponds to the on-premises infrastructure, but it has a maintenance cost and needs to be fixed or patched.
Renting a car is better and corresponds to hosting.
The train, for its part, is equivalent to the Cloud. No need to care about maintenance at all, but the downside is that you cannot change the schedule or where it goes.
So, when moving to the Cloud, you are looking for availability and scaling, but you have no control over it. It also means that the Cloud is not a silver bullet and is not for everything.
To manage identity, the .NET Services (one of the Cloud services) rely on external authorities. The enterprise defines the identities and roles and builds a trust relationship with an external authority that will be trusted by the Cloud. It also means that .NET Services need to support several technologies.
This leads to at least two challenges: focusing on the use of SOA, and on resource decentralization.
To support identity management, the Cloud uses tools such as claims, tokens and Security Token Services.
Finally, to control the identities on .NET Services, there is an MMC console to manage them.

Friday, November 14, 2008 10:43:05 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
# Thursday, November 13, 2008

<Disclaimer>These are personal notes of what I retained from the session. They may be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Mike Flasco starts by running an application that downloads the slides from the cloud...
The goal of the presentation is to explain the Data Services proposed by Microsoft in the Cloud. With Data Services, not only are the entities stored, but also the relationships, as links. Moreover, it directly supports paging and direct queries.
The HTTP verb controls the operation performed on the data: GET for SELECT, POST for INSERT and DELETE for... DELETE.
It means that every resource is accessible through a URI.
To set up a model in Data Services, you need to:
1. Create the data model in Visual Studio, generating an .edmx file
2. Create a .svc service inheriting from DataService<DataModel>
3. Set the access rights
4. Call the service. That's it!
Entity exposure can be set per entity (read-only, read/write, all).
To support row-level security, it is possible to implement interceptors that are executed before the GET, PUT, etc.
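Steps 2 and 3 above, plus a query interceptor, might look as follows, assuming the ADO.NET Data Services API of the time; the NorthwindEntities model, the Order type and its IsPublic property are hypothetical placeholders generated from the .edmx in step 1:

```csharp
using System;
using System.Data.Services;
using System.Linq.Expressions;

// Step 2: the .svc service inherits from DataService<DataModel>.
public class OrdersService : DataService<NorthwindEntities>
{
    // Step 3: access rights are set per entity set.
    public static void InitializeService(IDataServiceConfiguration config)
    {
        config.SetEntitySetAccessRule("Orders", EntitySetRights.AllRead);
    }

    // Row-level security: this filter runs before every GET on Orders.
    [QueryInterceptor("Orders")]
    public Expression<Func<Order, bool>> OnQueryOrders()
    {
        return o => o.IsPublic;
    }
}
```

After step 4, a GET on `/OrdersService.svc/Orders` would only ever return the rows the interceptor lets through.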
When executing a LINQ query, under the covers it sends a URI request to the Data Services, and the same thing happens as when we use the object model.

Thursday, November 13, 2008 8:50:05 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -

<Disclaimer>These are personal notes of what I retained from the session. They may be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Search Server 2008 is a new brand of SharePoint Server 2007 for Search.
Among other things, it has a simplified install, an enhanced administration interface, new federation capabilities, some performance and stability enhancements and does not have document limits.
Search Server 2008 comes in two versions: Search Server 2008, which allows a distributed topology and is licensed per server, and Search Server 2008 Express, which is free but supports single-server installs only. In terms of features, there is no difference at all.
Michal Gideoni explains when we need to customize Search: basically, every time the user interface needs to be modified, customization can be put in place. A demo follows on how to create a new search tab.
To customize the user interface, most modifications can be done through the XSLT that generates the HTML, but it is important to know that the Search Web Parts are sealed, and therefore cannot be inherited from.
The search supports two types of queries, based on keywords or on SQL statements, but both use the same flow.
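A hedged sketch of the two query types, assuming the MOSS 2007 search object model; this is server-side code, and the site URL and query texts are illustrative:

```csharp
using System.Data;
using Microsoft.Office.Server.Search.Query;
using Microsoft.SharePoint;

class SearchSketch
{
    static void Run()
    {
        using (SPSite site = new SPSite("http://server"))
        {
            // Keyword syntax.
            KeywordQuery keyword = new KeywordQuery(site);
            keyword.QueryText = "techdays geneva";
            ResultTableCollection keywordResults = keyword.Execute();

            // SQL syntax.
            FullTextSqlQuery sql = new FullTextSqlQuery(site);
            sql.QueryText =
                "SELECT Title, Path FROM SCOPE() WHERE CONTAINS('techdays')";
            ResultTableCollection sqlResults = sql.Execute();
        }
    }
}
```

Both objects feed the same query pipeline, which is why the session insists that the two syntaxes share the same flow.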
The Search Query web service is useful for remote applications, is also accessible from the Office Research Pane, and allows both query methods.
Federation is a way to get search results from other search engines. It sends them the query, receives the results and transforms them to format the output. The Federation Web Part can only be connected to a single federated search location defined in MOSS, and supports synchronous or asynchronous rendering.
Currently, the Federation Web Part supports two types of locations: OpenSearch 1.0/1.1, which must return XML results or even simpler ATOM or RSS/RDF feeds, and the local search index. In case OpenSearch is not available or the result is not in XML, it is possible to implement a custom connector that wraps the result and generates the XML needed by the web part. To do that, you need to:
1. Create an .aspx page that takes a parameterized URL
2. Connect to the federated search engine and generate an RSS feed
3. In the Federation Web Part, connect to that .aspx page.

Thursday, November 13, 2008 8:48:54 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -

<Disclaimer>These are personal notes of what I retained from the session. They may be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Today, applications face mainly three challenges: distributed work, code coordination leading to complexity, and, finally, the management and tracking of these applications, which is not trivial.
The goal of a WF program is to coordinate work, but the problem with such applications is that they need a hosting process to run. According to Jon Flanders, the hosting code takes most of the development time. Moreover, this hosting code is not the most interesting to write.
The aim of Dublin is to be the host of choice for WF applications, and this platform comes with two main tools: a Visual Studio designer built with WPF, and an improved version of the Visual Studio debugger. Whereas the .NET framework 3.5 offers two execution models, State Machine and Sequential, WF 4.0 comes with a third one: Flowchart. This execution model allows designing workflows with branches and loops that are more complex than what sequential workflows allow.
Another problem with 3.5 is that parallel activities did not run on separate threads, meaning that only pseudo-parallelism was in place. WF 4.0 introduces true parallelism, and the runtime is able to coordinate multiple activities at the same time.
A side comment from Jon: the code activity should not be used, because it hides the purpose of the workflow, whereas the goal of a workflow is to be understandable by looking at the design itself.
WF 4.0 introduces compensation through a dedicated activity (the compensable activity) and also two types of correlation: context-based or content-based (like BizTalk). The former only works in request-response scenarios; the latter can also be used for one-way communication.
Finally, Dublin supports tracking and configuration through dedicated management consoles.

Thursday, November 13, 2008 8:47:48 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -

<Disclaimer>These are personal notes of what I retained from the session. They may be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

This general session was about how to reduce cost and carbon by sharing infrastructure. Pat Helland also showed how the new Microsoft datacenter will be built in Chicago, and the new concept of containers for the servers: this datacenter will contain 100,000 servers divided into 50 separate containers.

Thursday, November 13, 2008 8:46:28 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -

<Disclaimer>These are personal notes of what I retained from the session. They may be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Ishai Sagi starts by explaining what a field type is.
A custom field is needed when custom validation, custom controls, access to a data source or new properties are required; for example, to create a phone field.
The downside is that a custom field does not inherit everything from its parent field. Moreover, it does not integrate with Office; in other words, it is not displayed in the Office fields bar. In the datasheet view, the custom controls are read-only, and they are not included when you export a list to Excel. Finally, a complex data structure may be hard to integrate into filtering, grouping, search and querying.
Every custom field is composed of three components: an XML definition file, a class library and a field class.
The three most common overrides are: DefaultValue, to dynamically assign a default value based, for example, on the user context; GetValidatedString, which returns the value that will be stored in SharePoint; and finally FieldRenderingControl, for a custom user interface.
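For the phone field example, the validation override might be sketched as follows, assuming the WSS 3.0 object model; the class name and the validation rule are illustrative, and the two constructors are the ones the field framework requires:

```csharp
using System.Text.RegularExpressions;
using Microsoft.SharePoint;

// A custom field deriving from the built-in text field.
public class PhoneField : SPFieldText
{
    public PhoneField(SPFieldCollection fields, string fieldName)
        : base(fields, fieldName) { }

    public PhoneField(SPFieldCollection fields, string typeName,
                      string displayName)
        : base(fields, typeName, displayName) { }

    // The string returned here is what SharePoint actually stores.
    public override string GetValidatedString(object value)
    {
        string text = (value ?? "").ToString();
        if (!Regex.IsMatch(text, @"^\+?[0-9 ]+$"))
            throw new SPFieldValidationException(
                "Not a valid phone number.");
        return text;
    }
}
```

The matching XML definition file then registers the field type so it appears when creating a column, ideally as a reusable site column, per Ishai's advice.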
After two demos, the first showing how to create a simple custom field and the second showing a custom field with data validation, Ishai advises that when developing a custom field, we should always keep in mind that it could be used as a site column.
The rendering pattern allows defining how a custom control is rendered. It does not require any compiled code and is instead defined in an XML definition file.

Thursday, November 13, 2008 8:43:46 PM (GMT Standard Time, UTC+00:00)  #    Comments [0] -

<Disclaimer>These are personal notes of what I retained from the session. They may be incomplete, partially right or wrong. They are just part of the notes I took and of what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Andrew Connell, MOSS MVP, starts by exposing some of the current problems we can encounter when deploying a MOSS publishing site. First, when browsing such a site, IE displays a warning saying that the Name ActiveX control must be trusted and installed on the client machine in order to see the web site, which is really not ideal. There is a workaround for this, explained in Microsoft KB 931509.
The second problem is how to minimize the page payload, due mainly to the table-based layout used in SharePoint. A first recommendation is to use a CSS-based design; it is even more efficient because CSS files are cached. Then, enabling HTTP compression reduces the amount of data transferred by 71%. Finally, on a standard SharePoint page, Andrew showed us that 26% of the payload is used to transfer the core.js file (54K), which is used only for the Site Actions menu. The branding and content consume more or less 45% of the payload, the other scripts and CSS files 29%. The problem is that core.js is not needed for anonymous internet-facing visitors, but if it is suppressed, we are in a scenario not supported by Microsoft. So, rather than loading core.js at the top of the master page, it is better to load it asynchronously. To do that, it is necessary to implement the OnInit event in a custom control that suppresses core.js on the client side only if the access is anonymous. Then, at the bottom of the master page, an IFrame is placed that loads the core.js file.
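The OnInit part of that technique might be sketched as follows; this is an assumption-laden illustration rather than Andrew's exact code, with the control name chosen here and the "core.js" registration key taken from the commonly cited workaround:

```csharp
using System;
using System.Web.UI;
using Microsoft.SharePoint;

// Placed in the master page: for anonymous visitors, pretend core.js is
// already registered so SharePoint does not emit its <script> tag at
// the top of the page.
public class DeferCoreJs : Control
{
    protected override void OnInit(EventArgs e)
    {
        base.OnInit(e);
        if (SPContext.Current.Web.CurrentUser == null) // anonymous access
        {
            Page.ClientScript.RegisterClientScriptBlock(
                typeof(DeferCoreJs), "core.js", "");
        }
    }
}
```

An IFrame at the bottom of the master page then points at a page that does reference core.js, so the script still arrives on the client, just after the visible content has rendered.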
One of the biggest issues with custom control development is that such controls don't work with the WSS_Minimal security policy. So there are two choices: increase the trust level to WSS_Medium or even Full, or increase the trust of the custom components through CAS.
Moreover, /pages/forms/allitems.aspx is, by default, accessible to all users, which is not good for publishing sites. The problem comes from the fact that access is granted by "Limited Access". To solve this, the ViewFormPagesLockdown feature, which removes this right, needs to be activated, or reactivated if needed.
Page Output Cache is not activated by default. Enabling it will allow the generated HTML to be stored in RAM. Specific profiles can be created to apply to specific sites.
The object cache stores objects coming from the database in RAM. The amount of memory is configurable (default: 100 MB). To check the effectiveness of the object cache, use the ASP.NET 2.0 Cache Hit Ratio performance counter; if it is less than 90%, increase the configured memory.
Disk Based Cache is used to save round trips to the database for binary content, with the consequence that this content is stored on the WFEs' disks. It is configured in the web.config file, under the <BlobCache> element.
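The corresponding web.config entry looks like this (location, file-extension pattern and maxSize, which is expressed in GB, are illustrative values to adapt; the element ships with enabled="false"):

```xml
<!-- in web.config, under <SharePoint> -->
<BlobCache location="C:\blobCache" path="\.(gif|jpg|jpeg|png|css|js)$" maxSize="10" enabled="true" />
```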
On the topology side, it is strongly recommended to use different environments for authoring and for production, and to use the Web Content Management content deployment features to migrate content from one environment to the other. To optimize the environments, set the authoring environment to read/write, while production is set to read-only.

Thursday, November 13, 2008 1:02:26 AM (GMT Standard Time, UTC+00:00)  #    Comments [0] -

<Disclaimer>These are personal notes of what I retained during the session. They can be incomplete, partially right or wrong; they are just the part of the notes I took and what caught my attention. Nothing prevents the reader from getting more information on their favorite web site.</Disclaimer>

Roy Osherove is also one of my favorite speakers... and, once again, while waiting for the start of his presentation, he plays guitar (Nirvana, for example), which is good for a wake-up session :-)
He starts by surveying the current unit testing market. One of his conclusions is that even though NUnit has the biggest market share, there are better frameworks out there, such as Selenium, Watir, WatiN, Pex and others. The problem is not the frameworks, but how to simulate or fake external systems (mocking), or how to do thread testing. For example, Chess, which will be released with Visual Studio 2010, will allow thread testing (race conditions, deadlocks, etc.).
Project White, for its part, is a user interface testing framework, but testing a user interface implies, in any case, people dedicated full-time to such tasks. Needless to say, test maintenance is also a problem: imagine that a method signature changes; the corresponding tests must also be changed.
Another obstacle to unit testing adoption is the learning curve of the frameworks. Indeed, it is already hard to sell unit testing, but if people need a lot of time to write their first unit tests, it does not help.
On the IoC and container side, frameworks such as Ninject, Castle Windsor and Unity are coming. The trend is also to use IoC to wrap legacy code.
On the design side, the trends are: Domain Driven Design, design for testability, and Test Driven Design, in opposition to "classic" OO encapsulation or BDUF (Big Design Up Front).
For isolation frameworks, stubs, Typemock and Rhino Mocks are rising, while mocks-over-specification, NMock and EasyMock are falling. Remember, mocks are used to fake a system or an interface; nevertheless, most mock frameworks force the use of a specific design.
With this number of frameworks, the ideal would be a merge between the testing frameworks and the isolation frameworks, but it is going to be difficult: unit testing frameworks have no faking capabilities, while mock frameworks have no running capabilities.
Finally, Roy explains that the main reason why Test Driven Development fails is a push-back from the community: how do you explain that many lines of code need to be written for the tests, before writing the code itself?

Thursday, November 13, 2008 12:59:45 AM (GMT Standard Time, UTC+00:00)  #    Comments [0] -
About the author/Disclaimer

The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.

© Copyright 2022
Yves Peneveyre