Your .NET and Microsoft technologies specialist in Western Switzerland
# Monday, 08 August 2016

While deploying DeployR Enterprise 8.0.5 on some Linux machines, I ran into a nasty bug in the DeployR configuration utility.

First, I deployed a “Microsoft R Server on Linux” machine from the Azure marketplace. The setup is rather simple and quite fast, and the original setup would be fine if you wanted to keep the default ports for DeployR and Rserve.

As a side note, the default ports changed between version 8.0.0 and version 8.0.5. Whereas in version 8.0.0 the default ports were 8000, 8001 and 8004 for DeployR, DeployR with SSL and Rserve respectively, Microsoft adapted the default port numbers to the new version number: the new defaults are 8050, 8051 and 8054 for the same services. Basically, the first three digits of the port now spell out the version.

But if you want to change these ports, you can do so by running the utility /opt/deployr/8.0.5/deployr/tools/adminUtilities.sh.

Selecting the menu “6. Change DeployR Ports”, the utility offers the following options:

```
DeployR Port Option (Sub-Menu)
=================================
A. Change Tomcat Connect Port (Currently: 8050)
B. Change Tomcat SSL Port (Currently: 8051)
C. Change Tomcat Shutdown Port (Currently: 8052)
D. Change DeployR Rserve Connect Port (Currently: 8054)
E. Change DeployR Event Port (Currently: 8056)

M  Back to Main Menu
Q  Quit

Enter an option:
```

All the options work, EXCEPT option “D”. If you try to configure option “D”, it completely overwrites the Tomcat configuration file (located at /opt/deployr/8.0.5/tomcat/tomcat7/conf/server.xml).

What it does is replace the original content with what should be the new configuration of the Rserve service (below, I wanted to change the port number to 8004). In other words, the content of the server.xml file becomes:

```
interactive off
source /opt/deployr/8.0.5/rserve/RScripts/source.R
encoding utf8
remote disable
port 8004
workdir /opt/deployr/8.0.5/rserve/workdir/Rserv8.0.5
```

So, to avoid any mistake, make a copy of the Tomcat configuration file first and, if you change the Rserve port number with the utility, restore the configuration file from that copy afterwards. Also make the port change manually in /opt/deployr/8.0.5/rserve/Rserv.conf.
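To make the workaround concrete, here is a minimal sketch of the backup-and-restore sequence. The paths are those of a default 8.0.5 install as described above; adjust them to your machine, and treat the sed command as an illustration of editing the `port` line in Rserv.conf.

```shell
# Paths of a default DeployR 8.0.5 install; adjust to your machine
DEPLOYR_HOME=${DEPLOYR_HOME:-/opt/deployr/8.0.5}
SERVER_XML="$DEPLOYR_HOME/tomcat/tomcat7/conf/server.xml"
RSERV_CONF="$DEPLOYR_HOME/rserve/Rserv.conf"

# 1. Back up the Tomcat configuration BEFORE using menu option "D"
if [ -f "$SERVER_XML" ]; then
  cp "$SERVER_XML" "$SERVER_XML.bak"
fi

# 2. Run the admin utility and change the Rserve port via option "D":
#      "$DEPLOYR_HOME/deployr/tools/adminUtilities.sh"

# 3. Restore the Tomcat configuration that the utility overwrote
if [ -f "$SERVER_XML.bak" ]; then
  cp "$SERVER_XML.bak" "$SERVER_XML"
fi

# 4. Apply the new port (here 8004) manually in Rserv.conf
if [ -f "$RSERV_CONF" ]; then
  sed -i 's/^port .*/port 8004/' "$RSERV_CONF"
fi
```

Keeping the .bak copy around also gives you a reference if a later run of the utility damages the file again.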

Monday, 08 August 2016 15:07:00 (GMT Daylight Time, UTC+01:00)
DeployR | Microsoft | RServ
# Sunday, 15 May 2016

I am quite used to giving public speeches, having spoken multiple times at conferences and internal meetings. But several weeks ago, the speech I gave had two main objectives: giving an overview of IBIS, and promoting our internal Toastmasters club. For the occasion, I thought it would be interesting to share how I usually prepare speeches, taking this recent meeting as an example.

To set the context: the primary objective was to promote our internal Toastmasters club, open to all our employees (but not only them), during a 45-minute presentation covering what public speaking is, examples of Toastmasters activities, and an overview of Toastmasters clubs. As part of the pure Toastmasters demonstration, there was a prepared speech with its evaluation, and a session of table topics. For the prepared speech, the group decided to reuse one of my previous Competent Communication speeches, which had the advantage of being a little bit business oriented. Indeed, I used this speech as a teaser and brief overview of IBIS, one of my current favorite topics. I can't say I am fluent in IBIS yet, but the advantage of knowing a topic that only a few people practice is that there is a lot to say. On the other hand, that advantage can be a problem too.

The two main objectives of all prepared speeches at Toastmasters (which should apply to speeches in general) are to be well structured, with an introduction, a body, and a conclusion, and to stay within the 5-to-7-minute limit (at least for the CC speeches). I am not going to dive into how these objectives were achieved (or not); rather, I will walk through the way I prepared the speech.

I like to put everything on paper, whether virtual or real. So the first step was to split my preparation sheet into three parts: Introduction, Body, and Conclusion. Then I wrote what I wanted to say in the introduction, such as why the people in front of me should be interested in the topic and what they would take away. Then came the details of the topic, in this case the IBIS notation and how to use it. Finally, the conclusion covered the main benefits of the notation and an example of how to use it.

At the end of this first step, I had about five pages of text, which is obviously too long. On average, a good speaking rate is around 120 words per minute. With this number in mind, I started removing content that was not absolutely necessary to understanding the topic. That is the difficult part, as you have to put yourself in the shoes of the audience and ask whether each piece is required or not. The advantage of this technique is that it helps you go straight to the point. In the end, only two A4 pages remained, which was reasonable.

Obviously, one does not speak the way one writes, so it is not important to spend a lot of time on the correctness of the text. Nevertheless, what I usually do is read the text out loud several times. This gives a first timing. And the more I read it, the more I adjust the wording and expressions, which I write back into the text. At this stage, though, the speech is not yet natural: it is still a text read aloud, not a speech.

After that step, I extract, and write separately from the text, one and only one word for each smaller part of the speech: the intro, the examples, sections of the body, and the conclusion. From that point on, I only use this list of words for my rehearsals. These words are enough for me to remember the underlying text, which frees me from the original script. Repetition after repetition, the speech becomes natural, and written-style words get replaced by spoken ones. Of course, the speech will not be exactly the same each time; variations will occur, but the essence of the message is kept.

Another advantage of 5-to-7-minute speeches is that you can rehearse them many times. I don't remember how many times I rehearsed this one, but when you start having the speech completely in your head, it means you are getting fluent with it and can consider yourself ready to deliver it in front of an audience. Additionally, you realize you no longer need the notes, and not having to focus on the exact phrases frees you to use other techniques, such as vocal variety to emphasize certain words or phrases, or eye contact. Lastly, and especially if you are not used to speaking in front of people, repetition builds self-confidence, drastically decreasing the stress on stage. At that point, I was ready to deliver my speech.

To summarize, here are the key points of how I prepare a speech:

  • After documenting the speech's topic, write down the full text of the speech
  • Rehearse the text by reading it out loud
  • Summarize the speech using one word per part (at most 10-12 words for a 6-minute speech)
  • Repeat, repeat and… repeat, staying away from the notes more and more
  • Once comfortable, deliver the speech
  • A last trick: I usually spend (and I have read similar figures elsewhere) an average of one hour of preparation per minute of speech. For the speech mentioned in this post, 6 minutes of speech meant around 6 hours of preparation.
Sunday, 15 May 2016 20:13:00 (GMT Daylight Time, UTC+01:00)
Public Speaking
# Monday, 22 February 2016

How many meetings do you attend?
And of all those meetings, do you read the minutes you receive?

Then imagine that the meeting minutes are sent several days after the meeting took place, as an e-mail attachment. Chances are you will not even open that attachment; you will process the e-mail by either deleting it or moving it into one archive or another.

The goal of meeting minutes, besides keeping what was discussed during the meeting on (virtual) paper, is also to follow up on action items, or to remind participants that something has to be done. Rarely are they used to go back and see who decided what, and when. For that purpose, I often see a manager maintaining a register of all the decisions and actions in a big Excel file, growing to hundreds or thousands of lines, in which even the manager can't find anything.

But the real problem is that the reasons why actions or decisions were taken are completely missing from the minutes. And when the time comes to blame someone because a previous decision turns out to be a wrong one, that same manager will search through the thousands of lines to find the guilty party, but will never find the reasons and the context of that decision. They are lost forever. So we end up with the following kind of statement in meeting minutes:

| # | Subject | Type (Information, Action, Decision) | By When | Who |
|---|---------|--------------------------------------|---------|-----|
| 1 | To implement a SharePoint Content Type Hub for the intranet in order to categorize the content. | D | 25.01.2016 | Charlie Crews |

The problem with such statements in meeting minutes is that, once written, the decisions they contain are completely separated from their context. As we know, contexts change, making the original decisions obsolete and wrong. Unfortunately, unless the minutes are written with a lot of detail, the importance of the decision's context will be forgotten. Additionally, to ease reading, items in meeting minutes tend to be short and rather dry, omitting many elements regardless of their importance, and therefore opening the door to interpretation. Text is just too linear to correctly describe a reasoning process or the different paths explored before the decision. Finally, interpretation occurs at least twice: first at writing time, and then again when reading the minutes.

In our example, unfortunately, the implementation of the SharePoint Content Type Hub didn't deliver on its promises, and several months later, looking at the meeting minutes, one discovers that the decision to use a Content Type Hub was taken by poor Charlie Crews, who is now in trouble justifying it. Obviously, nobody remembers why the decision was taken, nor the discussions that took place before it was written into the minutes.

So the question is: can we avoid this kind of situation, and how?

About a year ago, I started working on ways to capture the reasoning behind decisions made by a group of people, or simply to write down all the elements before taking a decision myself. I am a big fan of taking notes with pen and paper, but when it is time to share them with others, the only way for everyone to reach the same understanding is to share the same notation. For that purpose, I discovered the IBIS notation a little more than a year ago and have been using it for my notes. The strength of this notation is how easily it models the decision-making process, thanks to its simple set of elements, and the fact that you don't need any software to use it: pen and paper do the job well. Also, because of its simplicity, there is no need to spend days learning the different elements and icons of the notation; they are pretty straightforward.

I don't want to go into a full description of the IBIS notation elements but rather demonstrate how the example above could be addressed using this technique. I would also like to emphasize that this is only an example: even though it is taken from a real project, it does not describe the real elements or argumentation of any decision of that project. In other words, the goal is not to discuss whether the pros and cons of using a SharePoint Content Type Hub are correct. And, to end the “disclaimer”, I am still improving my usage of the notation, so what is shown below may not be exactly in line with IBIS and dialogue mapping (which is a further step in practicing IBIS).

Back to the meeting minutes problem, here is an example of how the decision could have been modeled:

[Diagram: ibismeetingminutes, the IBIS map of the meeting]

Again, this model may not be complete, but it gives an idea of how a decision could emerge. First, at the left-most end is what is called the “root question”: the question or problem that needs to be answered. In our example, it is “What is the best way to apply metadata to documents?”. While debating that question during the meeting, several answers are given by the different participants. Each of these answers has benefits and, on the opposite side, drawbacks. All of these elements are also captured and linked to their related answers. For example, “not using Content Types at all” also means that no standardization of metadata or templates is possible.

Are all of these arguments valid? Well, if an argument is disputed, that dispute also has to be present in the diagram, as one of the goals of the diagram is to be transparent and to show where there is disagreement. Another positive point is the neutrality of the diagram: no name is associated with an idea, argument, or question, which puts all the participants at the same level.

So, for one question or problem, several ideas or answers are provided, and for each idea, pros and cons are captured on the diagram. But the question remains: how does this help in taking the right decision?

As mentioned earlier in this post, it is important to keep track of the context and the reasons for a decision. That is why, at the bottom of the diagram, there is a question about the solution selection criteria, with answers that I have put in descending order of importance: “Centralized Control”, “Search Improvement”, and “Minimal Training”. What this describes is that, at the time the meeting was held, the most important criterion was to have a central place for the management of the Content Types.
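As an illustration, a rough text rendering of such a map could look like the sketch below. The ideas and arguments are reconstructed from the prose above, not taken from the actual project map, so treat them as placeholders:

```
Q: What is the best way to apply metadata to documents?
├── Idea: Use a SharePoint Content Type Hub
│   └── Pro: Centralized control of the Content Types
├── Idea: Use local Content Types per site
│   └── Con: No centralized control
└── Idea: Do not use Content Types at all
    └── Con: No standardization of metadata or templates

Q: What are the selection criteria?
├── 1. Centralized Control
├── 2. Search Improvement
└── 3. Minimal Training
```

Because no name is attached to any node, the map stays neutral; only the reasoning is kept.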

Then, instead of sending a Word document containing meeting minutes with context-less decisions, sending the map of the meeting has the following advantages:

  • Even people who are not familiar with IBIS can understand its simple icons and notation
  • People can easily understand why a given decision was taken
  • Meeting participants need not worry about being associated with particular arguments
  • If the decision later turns out to be wrong, a good part of the analysis is already done and doesn't need to be redone from scratch to find a good alternative; the existing analysis only needs to be reviewed to update the selection criteria, the pros and cons, and potentially add new ideas

Six months later, when everything has gone bad, coming back to this kind of meeting minutes will show the context and the rationale of the decision. From there, either it will be discovered that the decision was not the worst one, or that the environment and the requirements changed, leading to a new decision. Another benefit is that Charlie Crews does not appear in the diagram, which means the decision was (normally) taken collegially.

Monday, 22 February 2016 21:44:00 (GMT Standard Time, UTC+00:00)
IBIS
# Thursday, 14 January 2016

On the 9th of December last year (announcements on Scott Hanselman's blog and at the .NET Foundation), Windows Live Writer became Open Live Writer and, at the same time, became open source through the .NET Foundation. Because of various licensing and complexity hurdles, several features were removed, and, for the time being, the team is focusing on Windows 10, but it works without problems under Windows 8 for me.

What a good job they did!

That said, as I work on several computers and actively use OneDrive, I was used to having the local draft folder on OneDrive too. Thus, I wondered whether it was possible to use the same trick as with WLW 2012 and add the PostsDirectory key in the registry in order to set the draft folder.

For Windows Live Writer 2012, the PostsDirectory registry key was under HKCU\Software\Microsoft\Windows Live\Writer.

But, first because it is no longer Microsoft providing this very useful tool, and also because it is no longer part of the Live Essentials suite, the registry key had to be added in another location.

Simple things are always the most efficient: no need to search for hours where to add the key, it is simply under HKCU\Software\OpenLiveWriter. Then restart OLW, if needed, so it reloads the new value, and the drafts will be saved in the new location.
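For convenience, the key can also be imported from a small .reg file. This is a sketch: the PostsDirectory value name and the OpenLiveWriter key come from the post above, while the drafts path is purely an example; replace it with your own OneDrive folder.

```reg
Windows Registry Editor Version 5.00

[HKEY_CURRENT_USER\Software\OpenLiveWriter]
"PostsDirectory"="C:\\Users\\yourname\\OneDrive\\Blog\\Drafts"
```

Double-click the file (or run `regedit /s file.reg`) and restart OLW to pick up the change.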

Oh, one important thing: if you get your OneDrive files from another computer through OneDrive synchronization, make your drafts “Available Offline”. Indeed, for me, OLW was not able to see the drafts until I made them available locally.

Thursday, 14 January 2016 01:11:00 (GMT Standard Time, UTC+00:00)

# Thursday, 01 October 2015

After installing the Zachman Framework MDG Add-in for Sparx Enterprise Architect, there is an issue when opening the provided sample, ZF Example.

EA complains that it "couldn't lock file".

One way to successfully open this file is to run EA as an administrator, but that is not the most convenient. It is better to open Windows Explorer, select the "Program Files (x86)\Sparx Systems\MDG Technology\Zachman" folder, and change the permissions on it: give "Full Control" to all users, and the example will open.

Thursday, 01 October 2015 10:57:19 (GMT Daylight Time, UTC+01:00)
Enterprise Architect
# Monday, 01 June 2015

Recently, I was asked several times to get the GUID of a term in a SharePoint Term Store. Unless you have access to the package that deployed the terms, you need to use PowerShell or write a quick console app to get them.

Unfortunately, I didn't have access to the server, which meant no PowerShell or console app.

But I tried to see whether the term IDs could be retrieved from the user interface. And the answer is: YES, it is possible.

To do so, open the "Term Store Management Tool" and open the "Developer Tools" (with IE). Go over the list of terms and, for the term whose GUID you want, check the "id" attribute of its "<li>" tag: it is the GUID of the term.

In the same way, you can get the GUID of the parent, up to the Term Set ID and the Term Group ID, giving you the whole hierarchy of IDs.
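If you want several GUIDs at once, a small helper typed into the Developer Tools console can collect them. This is only a sketch built on the <li> structure described above; the selector is an assumption and may need adjusting to your SharePoint version's markup.

```javascript
// Collect the id attributes (term GUIDs) of all <li> elements under a root node.
// Assumption: in the Term Store Management Tool, each term's <li> id is its GUID.
function collectTermIds(root) {
  var ids = [];
  var items = root.querySelectorAll('li[id]');
  for (var i = 0; i < items.length; i++) {
    ids.push(items[i].id);
  }
  return ids;
}

// In the Developer Tools console, on the Term Store Management Tool page:
//   collectTermIds(document);
```

The result is a plain array, so you can copy it out of the console in one go instead of hovering over each term.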

Monday, 01 June 2015 12:13:59 (GMT Daylight Time, UTC+01:00)

# Monday, 02 February 2015

It's been quite a while, but no, this blog is not dead. Indeed, the last post was written at the last SharePoint Conference, which was a great one, once again. Since then, a lot has evolved: in the community, in the approach adopted by Microsoft regarding SharePoint and its other technologies, and, last but not least, in the projects I was involved in and the roles I played in them.

Over the course of last year, I worked on a very large SharePoint collaboration platform project, which kept me away from the blog and other social networks. A lot of experience and knowledge can be shared and, I hope, will be shared on this blog. From the beginning of my career, I have always wanted to share what I was learning on projects, or what I was reading on the web when I thought it would be useful. My first blog post went online in October 2003, on the Blogger platform, more than one year after I opened my website with my own domain name. The experiences I published during this period were more about BizTalk, and SharePoint was absolutely not on my radar. Since then, I moved from BizTalk to MFC, COM, then .NET, to finally embark on the SharePoint boat at the SharePoint Conference 2007 in Berlin.

My role has changed again, taking me a bit further away from development and technical activities than I expected. The topics shared on this blog were rather technical and developer oriented, which made me wait to be back on the technical side before continuing to write in the same vein. Lately, I realized that, in the fields I am working in now, there is also a need to share experiences, and the bell rang last week when I turned 40 (yes, time flies…). So, what will happen to this blog?

First, SharePoint will no longer be the only topic on this blog. The posts will not only be about development; the content will be extended to functional and enterprise architecture. Looking at what is available on the web about these topics, I personally think there is room for new or additional content. During the last months, I was involved in many functional meetings and workshops in order to gather needs and feedback from users. One consequence is that I had to find techniques to capture what I was hearing, and I started applying new disciplines: dialogue mapping with IBIS, and an extensive usage of Enterprise Architect, from requirements through physical data models, to name a few. Therefore, expect to see more of these topics on this blog.

In addition to the changes explained above, and as the title of this post suggests, both the site and the blog need to be renewed with a new design and more interesting content, especially for the website. Not yet 100% sure, but this will likely move to Azure.

On another note, in 2014 I had the pleasure of attending and participating in a number of events. One of them was speaking at the Microsoft ALM Day in Lausanne in December and, earlier, attending the SharePoint Conference 2014. I am really looking forward to seeing what happens with the Ignite Conference. Apparently, this mega-conference in Chicago will be a content gold mine, also because Office 2016 is around the corner (I was told that internal builds of both Office and SharePoint would be available to some lucky people in the coming weeks). Unfortunately, for the time being, I haven't planned to fly to Chicago, and the European SharePoint Conference also seems compromised (just a matter of bad timing). This will probably be the occasion for me to focus on and increase my involvement in public speaking at different events.

As you can see, there is a lot to come, and I commit to keeping this blog alive with interesting content. I hope you will stay tuned. Thanks for reading!

Monday, 02 February 2015 07:48:26 (GMT Standard Time, UTC+00:00)

# Thursday, 06 March 2014


Speaker : Ricky Kirkham

Updating a SharePoint app is necessary, of course, to fix bugs, but also to bring new functionality. In the past, solutions and features were rarely updated, mainly because it was simpler to replace them, and recycling the farm was required. But a replace strategy was less cost effective.

Because developers don't know who their customers are (store apps), there is a need for a notification when there is an update of the app. For non-store apps, migration is harder. The app web domain differs between an old and a new version of an app.

An app update is deployed as an app package with a different version number. A message tells the user that there is an update for the app, and it is possible to update the app directly from the app's callout.

It is not possible to force users to update, and in some situations the users may not have sufficient permissions. You can't assume that all instances of the app are on the previous version, and only one version of the app can be in the store. A consequence is that, in an update scenario, it can't be assumed that the update is not a fresh install: the app package has to support an initial install AND an update from any version. The version number is the only way for an app to detect whether a previous version is deployed. If the update process fails to add a component, that component simply won't be installed. This means instances can be inconsistent even while reporting the same version number, because an update on one instance failed to install a single component. It is also possible, by mistake, for different updates to add the same column twice or more.

Best Practice 1: Test the update with all the previous versions of the app. So, install each version in a different subweb of the test site and test the update on every one of them.

Best Practice 2: Napa does not support updating apps; VS is a must. Roll back on error, for all components and data. It is almost automatic, but the developer has to help. The version number in the app manifest must be changed and the app package uploaded. It may be necessary to update the AppPermissionRequests and AppPrerequisites sections.

All app web components are in a single feature, so updating the app web means updating the feature manifest. The VS feature designer does not display the update elements, so it is necessary to disable the feature designer.

Best Practice 3: Use the same version for the feature as for the app. <ElementManifests> is only processed on a clean install; <UpgradeActions> is processed on both clean installs and updates. <CustomUpgradeActions> are not applicable to apps. Update actions should not reoccur in later versions; that is the purpose of the <VersionRange> markup.
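As a sketch, version-gated upgrade markup in the app web feature could look like the fragment below. The schema elements are the ones named above; the element file name is hypothetical:

```xml
<UpgradeActions>
  <!-- Processed only when upgrading an instance from a version below 2.0.0.0 -->
  <VersionRange EndVersion="2.0.0.0">
    <ApplyElementManifests>
      <ElementManifest Location="NewListInstance\Elements.xml" />
    </ApplyElementManifests>
  </VersionRange>
</UpgradeActions>
```

Because the range is bounded only by EndVersion, the action runs for every older version exactly once, which matches the advice of not repeating update actions in further versions.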

Best Practice 4: Do not use a BeginVersion attribute. There are two ways to update the host web: descriptive markup or code. Only two kinds of components can be deployed via markup: app parts and custom actions. On the good side, markup performs a complete replacement of the previous version.

Best Practice 5: When updating an app part, change the Name property of the ClientWebPart. Whole-for-whole replacement logic is only applicable when there is no risk of data loss. For a provider-hosted app, the update of the remote components is separate from the update of the SharePoint app.

Best Practice 6: Changes to remote components must not break older versions of the app. For example, if a new version of the remote page introduces new features, it will immediately be available to all users, but it will break for those whose app instances do not yet have the corresponding SharePoint components.

Best Practice 7: Pass the app version number as a parameter in the URL query to the remote page. This avoids the issue of the previous point. If all the changes are on remote components, don't update the SharePoint app. For a single-tenant provider-hosted app, the update is part of the same event. An Upgraded event handler is a remote event receiver and, using CSOM or REST, can do anything. It is registered in the app manifest and executed at the end of the update process. It can provide custom logic to update remote databases or host web components.

Best Practice 8: Catch all errors in the update event handler, because the update infrastructure does not know about exceptions raised in the handler. The code has to roll back the updates itself.

Best Practice 9: When there is an error, the code must roll back what the update did. This is typically done in the catch block of the remote event handler. If you are lucky, the backup and restore mechanism can be used. But the rollback needs to take into account that the previous version was not necessarily the latest; therefore, more than one rollback block is required, mirroring the update path.

Best Practice 10: If you add a component to an app in an Upgraded event handler, be sure to add the same code to an Installed event handler, because a component that must be deployed during an update must also be deployed during a brand-new installation. In the full-install code, however, there is no need to test the version number.

Thursday, 06 March 2014 21:46:21 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14


Speaker : Richard Harbridge

Solutions should be rapidly deployed and easy to update. It is important for a SharePoint solution to be available externally and to work on any device. For a solution to be adopted, it needs to be regularly updated and iterated, and it must be available anywhere, on any device: nowadays, a solution that is not on mobile will get little or no adoption. To answer the demand quickly, existing assets must be leveraged.

Doing a pros-and-cons of buy vs. build is not that helpful when it comes to SharePoint, as it is not so simple. There is a need to map the needs of the organization to the best technologies, but that is not easy either, as there is a plethora of technologies. SharePoint has multiple options, such as online and on-premises, and different versions and editions. Moreover, there are 3.4 million developers, which means a huge number of partners. On top of that, there are many products, sometimes filling the same gaps. Instead of doing a buy vs. build, go through an assessment process in which the needs are evaluated as well as the capabilities of the organization, including internal resources. If those are lacking, investigate whether an existing piece on the market can be used. More important still is knowing how to build and how to buy pieces. A solution and its ecosystem need to be constantly evaluated.

There are two kinds of solutions: user driven and IT driven. Implementing SharePoint is about allowing business users to develop and implement solutions without the involvement of IT. The best way is to start simple. Because everything is now an app, users get empowered. From an IT perspective, SharePoint is highly extensible.

Do not build a SharePoint solution if an Office App can do the job, or if the data should not be stored in SharePoint. A typical scenario is storing relational data in a list rather than a database: if there are many-to-many relationships, it definitely has to be stored in a database. When implementing a solution whose needs could be met by another product, clearly define the limit beyond which it would be better to go with that product rather than keep implementing in SharePoint. SharePoint can still be used to validate some concepts.

Before buying a 3rd-party solution, it is crucial to understand the needs. Then, is there a practical out-of-the-box (OOB) solution? The process of buying a 3rd-party solution can be compared to a sales qualification process: first identify the needs and determine whether OOB options can be used. If not, establish the type of product that would help and the candidate vendors. To compare them properly, a questionnaire must be established before, possibly, entering into negotiations and purchasing.

Some useful websites give reviews of SharePoint solutions: PinPoint, SharePoint Reviews, or even the Office Store. To get feedback on products, analysts, customers, and consultants are valuable sources, as are vendor whitepapers, which can however sometimes be biased.

 

Thursday, 06 March 2014 21:31:22 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14
About the author/Disclaimer

Disclaimer
The opinions expressed herein are my own personal opinions and do not represent my employer's view in any way.

© Copyright 2017
Yves Peneveyre