# Monday, 22 February 2016

How many meetings do you attend? And of those meetings, how many of the minutes you receive do you actually read?

Then, imagine that the meeting minutes are sent several days after the meeting took place, as an e-mail attachment. Chances are that you will not even open that attachment, and will process the e-mail by either deleting it or moving it to one archive or another.
The goal of meeting minutes, besides keeping what was discussed during the meeting on virtual paper, is also to follow up on action items, or to remind participants that something has to be done. Rarely are they used to go back and see who decided what, and when. For that purpose, I often see a manager maintaining a register of all the decisions and actions in a big Excel file, growing to hundreds or thousands of lines, in which even the manager can no longer find anything.
But the real problem is that the reasons why actions or decisions were taken are completely missing from the minutes. And when the time comes to blame someone because a previous decision appears to be a wrong one, that same manager will search through the thousands of lines to find the guilty party, but will never find the reasons and the context of the decision: they are lost forever. So, we end up with the following kind of statement in meeting minutes:

| # | Subject | Type (Information, Action, Decision) | By When | Who |
|---|---------|--------------------------------------|---------|-----|
| 1 | To implement a SharePoint Content Type Hub for the intranet in order to categorize the content. | D | 25.01.2016 | Charlie Crews |

The problem with such a statement in meeting minutes is that, once written, the decision is completely separated from its context. As we know, contexts change, making the original decision obsolete or wrong. Unfortunately, unless the minutes are written in great detail, the importance of the decision's context will be forgotten. Additionally, to ease reading, items in meeting minutes tend to be short and rather dry, omitting many elements regardless of their importance, and therefore opening the door to interpretation. Text is simply too linear to describe a reasoning correctly, or the different paths explored on the way to the decision. Finally, interpretation occurs at least twice: first at writing time, and again when the minutes are read.

In that example, unfortunately, the implementation of the SharePoint Content Type Hub didn't deliver on its promises and, several months later, looking at the meeting minutes, one discovers that the decision to use a Content Type Hub was taken by poor Charlie Crews, who is now in trouble justifying it. Obviously, nobody remembers why this decision was taken, nor the discussions that took place before it was recorded in the minutes.

So, the question here is: can we avoid this kind of situation, and how can this be achieved?

About a year ago, I started working on ways to capture the reasoning behind decisions made by a group of people, or simply to write down all the elements before taking a decision myself. I am a big fan of pen and paper for taking notes but, when it is time to share them with others, the only way for everyone to reach the same understanding is to share the same notation. For that purpose, I discovered the IBIS notation a little more than a year ago and adopted it for my notes. The strength of this notation is the ease with which it models the decision-making process, thanks to its simple notation, plus the fact that you absolutely don't need any software to use it: a pen and paper do the job well. Also, because of its simplicity, there is no need to spend several days learning the different elements and icons of the notation; they are pretty straightforward.

I don't want to go into a description of the IBIS notation elements but, rather, demonstrate how the example above could be addressed using such a technique. I would also like to emphasize that this is only an example which, even if taken from a real project, does not describe the real elements or argumentation of any decision of that project. In other words, the goal is not to discuss whether the pros and cons of using a SharePoint Content Type Hub are correct. And, to end the "disclaimer", I am still improving my usage of the notation, so what is shown below may not be exactly in line with IBIS and dialogue mapping (which is a further step in practicing IBIS).

Back to the meeting minutes problem, here is an example of how the decision could have been modeled:

[Figure: IBIS map of the decision discussed in the meeting]

Again, this model may not be complete, but it gives an idea of how a decision could emerge. First, at the left-most end, is what is called the "root question": the question or problem that needs to be answered. In our example, it is "What is the best way to apply metadata to documents?". While debating that question during the meeting, several answers are given by the different participants. Each of these answers has benefits and, on the opposite side, drawbacks. All of these elements are also gathered and linked to their related answers. For example, "not using Content Types at all" also means that no standardization of metadata or templates is possible.
Are all of these arguments valid? Well, if an argument is disputed, that dispute also has to be present in the diagram since, again, one of the goals of the diagram is to be transparent and to show where there is disagreement. Another positive point is the neutrality of the diagram: no name is associated with an idea, argument or question, which puts all the participants at the same level.
So, for one question or problem, several ideas or answers are provided and, for each idea, pros and cons are also captured on the diagram. But the question remains: how does this help in taking the right decision?

As mentioned earlier in this post, it is important to keep track of the context and the reasons for a decision. That is why, at the bottom of the diagram, there is a question about the solution selection criteria, with answers that I have put in descending order of importance: "Centralized Control", "Search Improvement" and "Minimal Training". What this captures is that, at the time the meeting was held, the most important criterion was to have a central place for the management of the Content Types.

Then, instead of sending a Word document containing the meeting minutes with context-less decisions, sending the map of the meeting has the following advantages:

  • Even people who are not familiar with IBIS can understand the simple icons and notation
  • People can easily understand why a given decision was taken
  • Meeting participants need not worry about being personally associated with arguments
  • If the decision later appears to be the wrong one, a good part of the analysis has already been done and does not need to be redone from scratch to find a good alternative; a review of the existing analysis is enough to update the selection criteria, the pros and cons, and potentially add new ideas

Six months later, when everything has gone bad, coming back to this kind of meeting minutes will show the context and the rationale of the decision. From there, either it will be discovered that the decision was not the worst one, or that the environment and the requirements have changed, calling for a new decision. Another benefit is that Charlie Crews does not appear in the diagram, which means the decision was (normally) taken collegially.

Monday, 22 February 2016 21:44:00 (GMT Standard Time, UTC+00:00)
IBIS
# Thursday, 14 January 2016

On 9 December last year (announcements on Scott Hanselman's blog and at the .NET Foundation), Windows Live Writer became Open Live Writer and, at the same time, became open source through the .NET Foundation. Owing to various licensing and complexity hurdles, several features were removed and, for the time being, the team is focusing on Windows 10; it works under Windows 8 for me without problems, though.

What a good job they did!

That said, as I work on several computers and actively use OneDrive, I used to keep the local drafts folder on OneDrive too. Thus, I wondered whether it was possible to use the same trick as with WLW 2012 and add the PostsDirectory key to the registry in order to set the drafts folder.

For Windows Live Writer 2012, the PostsDirectory registry key was under HKCU\Software\Microsoft\Windows Live\Writer.

But, first because it is no longer Microsoft providing this very useful tool, and also because it is no longer part of the Live Essentials suite, the registry key had to be added in another location.

Simple things are always the most efficient: there is no need to search for hours where to add the key, it is simply under HKCU\Software\OpenLiveWriter. Then restart OLW, if needed, to reload the new value of the parameter, and the drafts will be saved in that new location.
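
For convenience, here is a minimal PowerShell sketch of that change; the drafts path is a placeholder, so adapt it to your own OneDrive folder:

```powershell
# Create the OpenLiveWriter key if it does not exist yet, then point
# PostsDirectory at the OneDrive-synchronized drafts folder.
$key = "HKCU:\Software\OpenLiveWriter"
if (-not (Test-Path $key)) {
    New-Item -Path $key -Force | Out-Null
}
# "D:\OneDrive\OLW Drafts" is a placeholder path; use your own.
Set-ItemProperty -Path $key -Name "PostsDirectory" -Value "D:\OneDrive\OLW Drafts"
```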

Oh, maybe one important thing: if you get your OneDrive files from another computer through OneDrive synchronization, make your drafts "Available Offline". Indeed, for me, OLW was not able to see the drafts until I made them available locally.

Thursday, 14 January 2016 01:11:00 (GMT Standard Time, UTC+00:00)

# Thursday, 01 October 2015

After installing the Zachman Framework MDG Add-in for Sparx Enterprise Architect, there is an issue when opening the provided sample, ZF Example.

EA complains that it "couldn't lock file".

One way to successfully open this file is to run EA as administrator but, as this is not the most convenient way, it is better to open Windows Explorer, select the "Program Files (x86)\Sparx Systems\MDG Technology\Zachman" folder, and change the permissions on it. Give "Full Control" to all users, and the example will open.
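
If you prefer to script the permission change, here is a small PowerShell sketch doing the equivalent; it assumes a default installation path and must be run once from an elevated prompt:

```powershell
# Give the built-in Users group Full Control on the Zachman MDG folder,
# so that EA can lock the example file without running elevated.
$folder = "${env:ProgramFiles(x86)}\Sparx Systems\MDG Technology\Zachman"
$acl = Get-Acl -Path $folder
$rule = New-Object System.Security.AccessControl.FileSystemAccessRule -ArgumentList `
    "Users", "FullControl", "ContainerInherit,ObjectInherit", "None", "Allow"
$acl.AddAccessRule($rule)
Set-Acl -Path $folder -AclObject $acl
```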

Thursday, 01 October 2015 10:57:19 (GMT Daylight Time, UTC+01:00)
Enterprise Architect
# Monday, 01 June 2015

Recently, I was asked several times to get the GUIDs of terms in a SharePoint Term Store. And, unless you have access to the package that deployed the terms, you need to use PowerShell or write a quick console app to get them.

Unfortunately, I didn't have access to the server, which meant no PowerShell or console app.

But I tried to see whether it was possible to get the term IDs from the user interface. And the answer is: YES, it is possible.

To do so, open the "Term Store Management Tool" and open the "Developer Tools" (with IE). Go over the list of terms and, for the term whose GUID you want, check the "id" attribute of its "<li>" tag: it is the GUID of the term.

In the same way, you can get the GUID of the parent, up to the Term Set ID and Term Group ID, giving you the whole hierarchy of IDs.
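
For completeness, when server access is available, the PowerShell route mentioned at the beginning could look like the following sketch; it assumes the SharePoint Management Shell on a farm server and a hypothetical site URL:

```powershell
# Walk the default term store and print the GUID of every group,
# term set and term. "http://intranet" is a placeholder site URL.
$session = Get-SPTaxonomySession -Site "http://intranet"
$store = $session.TermStores[0]
foreach ($group in $store.Groups) {
    Write-Output ("Group   : {0} ({1})" -f $group.Name, $group.Id)
    foreach ($set in $group.TermSets) {
        Write-Output ("TermSet : {0} ({1})" -f $set.Name, $set.Id)
        foreach ($term in $set.GetAllTerms()) {
            Write-Output ("Term    : {0} ({1})" -f $term.Name, $term.Id)
        }
    }
}
```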

Monday, 01 June 2015 12:13:59 (GMT Daylight Time, UTC+01:00)

# Monday, 02 February 2015

It's been quite a while, but no, this blog is not dead. Indeed, the last post was written at the last SharePoint Conference which, once again, was a great one. Since then, a lot has evolved: in the community, in the approach adopted by Microsoft regarding SharePoint and its other technologies and, last but not least, in the projects I was involved in and the roles I played in them.

Over the course of last year, I worked on a very large SharePoint collaboration platform project, which kept me away from the blog and other social networks. A lot of experience and knowledge can be shared and, I hope, will be shared on this blog. Since the beginning of my career, I have always wanted to share what I was learning on projects, or what I was reading on the web when I thought it would be useful. My first blog post went online in October 2003, on the Blogger platform, more than one year after I opened my website with my own domain name. The experiences I published during that period were mostly about BizTalk, and SharePoint was absolutely not on my radar. Since then, I moved from BizTalk to MFC, COM, then .NET, to finally embark on the SharePoint boat at the SharePoint Conference 2007 in Berlin.

Again, my role changed a bit, which took me further away from development and technical activities than I expected. The topics shared on this blog were rather technical and developer-oriented, which made me wait to be back on the technical side before writing more in the line of this site. Lately, I realized that in the fields I am working in now there is also a need to share experiences, and the bell rang last week when I turned 40 (yes, time flies…). So, what will happen to this blog?

First, SharePoint will no longer be the only topic on this blog. The posts will not only be about development; the content will be extended to functional and enterprise architecture. Looking at what is available on the web about these topics, I personally think there is room for new or additional content. During the last months, I was involved in a lot of functional meetings and workshops to gather needs and feedback from users. One consequence is that I had to find techniques to capture what I was hearing, and I started to apply new disciplines. One of them is dialogue mapping with IBIS; another is an extensive usage of Enterprise Architect, from requirements through physical data models, just to name a few. Therefore, expect to see more of these topics on this blog.

In addition to the changes explained above, and as the title of this post suggests, the site and the blog both need to be renewed, with a new design and more interesting content, especially for the website. Not yet 100% sure, but this will likely move to Azure.

On the other side, in 2014, I had the pleasure to attend and participate in a number of events. Among them, I spoke at the Microsoft ALM Day in Lausanne in December and, earlier, attended the SharePoint Conference 2014. I am really looking forward to seeing what is going to happen with the Ignite Conference; apparently, this mega-conference in Chicago will be a content gold mine, especially with Office 2016 around the corner (I was told that some internal builds of both Office and SharePoint would be available to a few lucky people in the coming weeks). Unfortunately, for the time being, I am not planning to fly to Chicago, and the European SharePoint Conference also seems compromised (just a matter of bad timing). This will probably be the occasion for me to focus on and increase my involvement in public speaking at different events.

As you can see, there is a lot to come. I commit to keeping this blog alive with interesting content, and I hope you will stay tuned. Thanks for reading!

Monday, 02 February 2015 07:48:26 (GMT Standard Time, UTC+00:00)

# Thursday, 06 March 2014

Speaker: Ricky Kirkham

Updating a SharePoint app is necessary, of course, to fix bugs, but also to bring new functionality. In the past, solutions and features were rarely updated, mainly because it was simpler to replace them, and recycling the farm was required. But a replace strategy was less cost effective.

Because developers don't know who their customers are (store apps), there is a need for notification when there is an update of the app. For non-store apps, migration is harder. App web domains differ between an old and a new app.

An app update is deployed as an app package, but with a different version number. A message tells the user that there is an update for the app. It is possible to update the app directly by going into the app's callout.

It is not possible to force users to update, and in some situations the users may not have sufficient permissions. You can't assume that all instances of the app are at the previous version; only one version of the app can be in the store. A consequence is that, in an update scenario, it can't be assumed that the update is not an install: the app package has to support an initial install AND an update from any version. The version number is the only way for an app to see whether a previous version of the app is deployed. If the update process does not successfully add a component, it simply won't be installed. This means there can be inconsistencies between instances of the same app version, because an update on one instance may have failed to install a single component, yet all instances will have the same version number! It is also possible, by mistake, for different updates to add the same column twice or more.

Best Practice 1: Test the update with all the different previous versions of the app. So, install each version in a different subweb of the test site and test the update on every one of them.

Best Practice 2: Napa does not support updating apps; VS is a must. Roll back on error, for all components and data. It is almost automatic, but the developer has to help. The version number in the app manifest must be changed and the app package uploaded. It may be necessary to update the AppPermissionRequests and AppPrerequisites sections.

All app web components are in a single feature. Updating an app web means updating the manifest. The VS feature designer does not display the update elements, so it is necessary to disable the feature designer.

Best Practice 3: Use the same version for the feature as for the app. <ElementManifests> is only processed on a clean install. <UpgradeActions> is processed on both clean installs and updates. <CustomUpgradeActions> are not applicable to apps. Update actions should not reoccur in later versions; that is the purpose of the <VersionRange> markup.

Best Practice 4: Do not use a BeginRange attribute. There are two ways to update the host web: descriptive markup or code. Only two kinds of components can be deployed via markup: app parts and custom actions. On the good side, it uses a complete replacement of the previous version.

Best Practice 5: When updating an app part, change the Name property of the ClientWebPart. Whole-for-whole replacement logic is only applicable when there is no risk of data loss. For a provider-hosted app, the update of the remote components is separate from the update of the SharePoint app.

Best Practice 6: Changes to remote components must not break older versions of the app. For example, if a new version of the remote page introduces new features, it will be available directly to the users, but will break if not all SharePoint components are available.

Best Practice 7: Pass the app version number to the remote page as a parameter in the URL query; this avoids the issue of the previous point. If all the changes are on remote components, don't update the SharePoint app. For a single-tenant provider-hosted app, the update is part of the same event. An Upgraded event handler is a remote event receiver and, using CSOM or REST, can do anything. It is registered in the app manifest and executed at the end of the update process; it can provide custom logic to update remote databases or host web components.

Best Practice 8: Catch all errors in the update event handler, because the update infrastructure does not know about exceptions raised in the event handler. The code has to roll back the updates.

Best Practice 9: When there is an error, the code must roll back what the update did. This is typically done in the catch block of the remote event handler. If lucky, the backup and restore mechanism can be used. But the rollback needs to take into account that the previous version was not necessarily the latest; therefore, more than one rollback block is required, mirroring the update path.

Best Practice 10: If you add a component to an app in an Upgraded event handler, be sure to add the same code to an Installed event handler, because a component that must be deployed during an update must also be deployed during a brand-new installation. But, in the full install code, there is no need to test the version number.

Thursday, 06 March 2014 21:46:21 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14

Speaker: Richard Harbridge

Solutions should be rapidly deployed and easy to update. It is important to have a SharePoint solution available externally and working on any device. For a solution to be adopted, it needs to be regularly updated and iterated. It must also be available anywhere, on any device: nowadays, a solution that is not on mobile will get little or no adoption. To answer demand quickly, the existing must be leveraged.

Doing a pros and cons of buy vs. build is not that helpful; when it comes to SharePoint, it is not so simple. There is a need to map the needs of the organization to the best technologies. But that is not easy either, as there is a plethora of technologies. SharePoint has multiple options, such as online and on-premises, and different versions and editions. Moreover, there are 3.4 million developers, which means a huge number of partners. In addition, there are many products, sometimes filling the same gaps. Instead of doing a buy vs. build, go through an assessment process in which the needs are evaluated as well as the capability of the organization. Capability also means internal resources. If they are lacking, check whether an existing piece exists on the market, and investigate whether it is possible to use it. More important is to know how to build and how to buy pieces. A solution and its ecosystem need to be constantly evaluated.

Two kinds of solutions: user driven or IT driven. Implementing SharePoint is about allowing business users to develop and implement solutions without the involvement of IT. The best way is to start simple. Because everything is now an app, it helps users get empowered. From an IT perspective, SharePoint is highly extensible.

Do not build a SharePoint solution if an Office App can do the job, or if the data should not be stored in SharePoint. A typical scenario is storing relational data in a list rather than a database. If there are many-to-many relationships, it definitely has to be stored in a database. When implementing a solution that could be served by another product, clearly define the limit beyond which it would be better to go with the product and no longer implement it in SharePoint. SharePoint can still be used to validate some concepts.

Before buying a 3rd-party solution, it is crucial to understand the needs. Then, is there a practical OOB solution? The process of buying a 3rd-party solution can be compared to a sales qualification process. First, identify the needs and define whether there are OOB options that can be used. If not, establish the type of products that would help and the vendors that would be candidates. To compare in a fair way, a questionnaire must be established before, perhaps, entering into negotiations and purchasing.

Nice websites are available giving reviews of SharePoint solutions: PinPoint, SharePointReviews or even the Office Store. To get feedback on products, analysts, customers and consultants are valuable sources, as are vendor whitepapers, although those can sometimes be biased.

Thursday, 06 March 2014 21:31:22 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14
# Wednesday, 05 March 2014

Speakers: Laura Rogers, Jennifer Mason

Adoption is all about knowing the users.

Tip 1: Create meaningful navigation for users. The goal is to help users find quickly what they are looking for. To facilitate navigation, it is possible to edit the quick links directly from the browser. The default Seattle layout can be switched to the Oslo site layout; the Oslo layout removes the quick launch navigation, helping users focus on the content. Managed navigation uses the managed metadata term sets and can be used to have cross-site-collection navigation. Promoted links lists help users navigate the site.

Tip 2: Create a personal experience. Different methods are available to personalize content: audience targeting, filtered views and out-of-the-box web parts. Audiences can be used to show or hide web parts for specific groups of users. But audience targeting is not security. All the filtered content can be surfaced on a single page, similar to a dashboard where the web parts use filtering.

Tip 3: Drive processes and automate common tasks using workflow. Some automations that can be put in place are email notifications and scheduled reminders. SharePoint Designer was shown to describe how to create workflows in order to, for example, send a notification when something important happens. In SharePoint 2013, it is possible to have workflows with loops, which enables reminder workflows.

Tip 4: Design your site to encourage social interaction. The home page, social features, ratings and likes can help with this topic.

Tip 5: Utilize existing content and apps within your solutions. The goal here is to really reuse apps that are available in the SharePoint Store and not reinvent the wheel.

Tip 6: Take advantage of Office integration with SharePoint, using Office client links, Office Web Apps and live co-authoring. The embedded content available in the document callout can be used to surface the preview of a document in a page.

Wednesday, 05 March 2014 21:13:59 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14

Speaker: Edin Kapic

Scalability issues never appear when developing or testing. That is why it is better to design for scalability upfront.

Autohosted applications are not suited to a scalable architecture, as they can't be fine-tuned. And because apps now run outside SharePoint, the first advice is to minimize the round-trips. Deploying an app does not automatically mean scalability; it must be architected for it.

A scalable architecture should include a CDN, blob storage, table storage and distributed cache.

3 guidelines: avoid round-trips (caching, CDN), avoid bottlenecks (NoSQL, sharding, queues) and avoid single points of failure (redundancy).

Caching is the cheapest mechanism to avoid round-trips; stale data is the drawback. A local cache sits on each instance of the server or service, while a distributed cache is shared across the servers. Very frequently accessed data and static data should go in the local cache but, by default, the distributed cache should be used. An example of an effective cache mechanism is DNS: there is a mini cache in the browser, a local cache at the operating system level and, finally, one at the DNS resolver level.

CDNs are used to cache large blob data. Each blob can have a public URL (public blob); for private blobs, a shared signature is part of the URL. The first user accessing content from the CDN pays the price of putting the blob into the CDN's cache. Everything static, such as images, scripts and media files, should go in the CDN. To ensure that the correct version of a blob is accessed, the URL can contain a version parameter.

Storage locks are a cause of bottlenecks: database locks appear when writing and reading requests are mixed. While relational data and SQL Azure provide immediate consistency, with NoSQL or Table Storage there is only eventual consistency. CQRS is a pattern that splits database operations into queries and commands for different processing; queries can be optimized by parallelizing, whereas commands can't. SharePoint 2013 uses more or less the same pattern: search queries are cached, while other operations are done in the content database. Sharding is partitioning data across multiple databases; the tenant ID in O365 is used as a partition ID. This is a way to go beyond the storage limitations, though join operations become more difficult.

Reducing bottlenecks can also be achieved by using queues. The request/response model does not scale well and gets expensive very fast. By queuing requests, we add decoupling, and retries can be implemented; DDoS attacks can be prevented. If the number of requests in a queue gets high, it can be scaled. Azure storage queues are low level and use TCP/IP (end-to-end scenario); if you need a centralized queue system, it is better to use Service Bus queues. To notify the front end that a job has been done, use a framework like SignalR. Async is a way to optimize requests so that the same process can serve more than one request. Async can work even on a single thread; multiple threads are better when there are multiple cores. Until .NET 4.5, it was a bit difficult to implement such a solution.

In a redundant design, the goal is to avoid relying on a single node, as the app must continue working if a node goes down. In redundant apps, each request must be idempotent. Load balancing is an example of redundancy. Azure Traffic Manager maintains a table of the available nodes by regularly probing them to check whether they are online. When a request comes to Traffic Manager, it determines the most appropriate server before returning its address to the client.

Wednesday, 05 March 2014 21:06:26 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14

Speaker: Paolo Pialorsi

In SharePoint 2013, authentication is done using Classic mode (a Windows identity), producing an SPUser, or Claims, which is translated into a claims-based identity. It is also possible to authenticate a user through SAML.

Classic mode authentication is considered deprecated and is only manageable from PowerShell; legacy applications should migrate to claims. Claims-based authentication is now the default mode. It enables anonymous authentication, Windows authentication (NTLM or Kerberos), forms-based authentication (with the membership API, LDAP or a custom provider) and trusted identity providers (ADFS 2.0/3.0, Azure ACS, Azure Active Directory (AAD), or a custom IdP/STS).
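
As a side note, that PowerShell-only migration is done with the Convert-SPWebApplication cmdlet; a minimal sketch, with a placeholder URL:

```powershell
# One-way conversion of a classic-mode web application to claims
# authentication; run from the SharePoint 2013 Management Shell.
Convert-SPWebApplication -Identity "http://legacy.contoso.local" -To Claims -RetainPermissions
```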

Identity claims have a specific format, something like the following:

i:0#.w|piasys\paolo => Windows Account

i:0#.f|fbamembership|paolo => FBA Account

i:05.t|piasys-acs|paolo@pialorsi.com => SAML Account

The first letter defines whether it is an identity claim (i) or another kind of claim (c). Then ":" is the separator, followed by "0", which is reserved. The next character gives the claim type (# = logon name, 5 = email, a = username, ? = name identifier, ! = identity provider, - = role), followed by the value type of the claim (. = string). The next letter gives the claim issuer (w = Windows, t = trusted identity provider, f = forms authentication, g = custom claims provider).

Together, these elements define the claim issuer, the way the user authenticated, the separators, the issuer name and the claim value.
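
To make the encoding concrete, here is a small PowerShell sketch of my own (not from the session) that splits an encoded claim into the parts described above:

```powershell
# Decode the prefix of an encoded SharePoint claim.
$encoded = "i:0#.w|piasys\paolo"
$prefix, $value = $encoded -split '\|', 2
[PSCustomObject]@{
    ClaimKind = $prefix[0]   # 'i' = identity claim, 'c' = other claim
    ClaimType = $prefix[3]   # '#' = logon name, '5' = email, 'a' = username, ...
    ValueType = $prefix[4]   # '.' = string
    Issuer    = $prefix[5]   # 'w' = Windows, 't' = trusted IdP, 'f' = FBA, 'g' = custom
    Value     = $value       # for trusted IdPs, the issuer name precedes the value
}
```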

Windows Azure ACS 2.0 encompasses an identity provider and a security token service, supporting different identity providers such as Facebook and Google, or any other custom WS-Federation-compliant provider. The specifications supported by ACS 2.0 are OAuth 2.0, WS-Trust, WS-Federation, SAML 1.1/2.0 and JSON Web Token (JWT).

When trusting an external identity provider, only claims are sent. SharePoint uses claims providers to get claim values; it has 3 out-of-the-box claims providers: for Active Directory, for forms-based authentication and for any kind of trusted identity provider. A custom claims provider has to inherit from SPClaimProvider, which provides methods for name resolution, claims augmentation, etc. A claims provider requires a farm solution (so it is not suitable for Office 365).

App authentication is supported only for CSOM or REST API requests.

There are 3 app authentication models: internal app authentication, used by SharePoint-hosted apps; external app authentication via OAuth, supported by O365; and external app authentication via server-to-server, which is only supported on-premises.

Server-to-server is also called high-trust authentication, but it does not mean full trust. It establishes a direct trust between the servers and is based on X.509 certificates. It is available for provider-hosted apps and is configured using PowerShell.
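
As an illustration of that PowerShell configuration, here is a hedged sketch of registering a high-trust token issuer; the certificate path and issuer GUID are placeholders:

```powershell
# Trust the certificate with which the provider-hosted app signs its tokens.
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2 -ArgumentList "C:\Certs\HighTrustApp.cer"
$realm = Get-SPAuthenticationRealm
$issuerId = "11111111-1111-1111-1111-111111111111"   # placeholder issuer GUID
New-SPTrustedSecurityTokenIssuer -Name "HighTrustApps" -Certificate $cert -RegisteredIssuerName "$issuerId@$realm" -IsTrustBroker
```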

For authorization, the SPUser and SPGroup classes, inheriting from SPPrincipal, are the main actors and are almost the same as in SharePoint 2010. They define user or group principals to which explicit permissions can be given on sites, lists, items, etc. Authorization relies on permission levels, which are nothing more than sets of permissions.

Apps are not users, and their permissions are granted as all or nothing. An app can include permission requests: if they are all granted, the app can be installed; if even one request is not granted, the app will not be installed. Permissions cannot be changed after assignment, only revoked. An app only has full control over its own app web. Permissions target scopes and rights, such as site collection or list, and read, write, manage or full control; rights can also target a specific service. Rights and scopes are not customizable.

Wednesday, 05 March 2014 20:55:50 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14

Speakers: Agnes Molnar, Israel Vega Jr

Information architecture is the science of organizing and labelling content so that it can be found easily. How much does it cost not to find information?

Governance is important: without governance there is no naming convention (for documents or document libraries), no structure (everything is stored in the Shared Documents library) and no way to know where to store a new document, leading to the creation of new libraries and duplication. Some organization is needed.

A naming convention implies defining content types and terms, along with the correct structure.

IA design starts from different points: the user interface (or simply at random), cost, business needs, willingness to train, politics, IT restrictions.

The IA components are Document IDs, sites, and so on; SharePoint offers a wide range of components. Start, for example, with the metadata and content types, then the navigation components. The trend, though, is going towards search-driven navigation. Several master pages may be needed to display different kinds of content, and visual design can help organize the visual presentation of the information.

Never modify out-of-the-box content types such as Item or Document. Instead, inherit from the Document content type to create a company document content type and, from it, legal, sales and finance document content types. Each content type can be associated with different term groups or policies (i.e. content disposition).
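
As an illustration (my own, not from the session), a server object model sketch in PowerShell that creates such an inheriting content type; the URL and names are placeholders:

```powershell
# Create a "Company Document" content type deriving from Document,
# leaving the out-of-the-box content type untouched.
$web = Get-SPWeb "http://intranet"
$parent = $web.AvailableContentTypes["Document"]
$ct = New-Object Microsoft.SharePoint.SPContentType -ArgumentList $parent, $web.ContentTypes, "Company Document"
$web.ContentTypes.Add($ct) | Out-Null
$web.Update()
$web.Dispose()
```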

Logical navigation is typically the top navigation, secondary navigation, current, recent and breadcrumb. Physical navigation is structural, such as the quick links.

Home pages normally just surface content from other locations of the site. Summaries of summaries are content rollups; detail pages are the content.

Metadata may have different meanings depending on the context, so be careful; conversely, several fields may have the same meaning. That is why there is the notion of crawled and managed properties. Crawled properties are metadata that SharePoint can understand, but that may not make any sense to the user. Managed properties are used to group crawled properties that have the same meaning.

The recommended IA process is to start by asking what we are trying to do, why, and how we will know whether it is right or wrong.

During SharePoint implementation planning, user workshops, input from IT, design and some vision are needed. There is no need to be right the first time, but planning is crucial. To start a migration, first make an inventory of the existing content (size, metadata, owners, security). A cleanup may be necessary, and it is important to know what is and is not working today.

The cloud seriously has to be considered. It is not necessary to model everything, though: it is better to have something than nothing.

The document lifecycle needs to be standardized, thinking about governance first. Authoring is done in many places: for example, content can be created at a single location but displayed in many different places.

When migrating to the cloud, the challenges are the following: managed paths, multiple web applications, host header site collections, custom site definitions and large content.

Another motivation is to make search better; search is based on crawling, indexing, ranking and results display. Admins have to work on the optimization, but users are responsible for the content.

A hybrid integration means separating the workloads, for example sending new content to the cloud and keeping old content on-premises. Whether content should go to the cloud also depends on the licensing. MySites can go to the cloud directly; department collaboration can too, depending on the sensitivity of the information. In a hybrid deployment, the on-premises and cloud taxonomies are different.

To know whether things are going in the right direction, everything must be measured: measurements are used to improve things.

Wednesday, 05 March 2014 20:35:44 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14

Speaker: Cathy Dew

Responsive web design reacts to screen size and orientation; adaptive web design adjusts the content in the design. For SharePoint, she usually starts with 3 breakpoints: 1024px, 768px and 320px.

Don't try to make MySites responsive!

Design Manager is not good for intranets (lists, etc.). On-premises, responsive design is usually deployed through a full-trust solution, while on Office 365 this is not possible.

A responsive design implementation is based on grids, so use grid-based layouts, and make the grid flexible.

The key is to make everything flexible, such as images that can resize. But, there are some limitations with IE7 and lower.

Media queries are based on the media types defined by the W3C. A media query targets a device based on screen resolution and orientation. Ensure navigation consistency, which becomes more and more important as the device gets smaller. In SharePoint, there are the top, left and breadcrumb navigations: how are these going to be translated to the devices? For example, the quick navigation may disappear if the screen is not big enough. What to do with the ribbon? Maybe it is not needed on a smartphone, but required on a desktop.

Start from the smartphone version.

Step one, the wireframes, which separate design from functionality and avoid focusing on little design details. Also decide how content will be displayed (not designed). The wireframes already have the grid as an overlay to help the transition to the mockups. The most important content must be above the fold, especially on mobile devices, so that users don't have to scroll down to reach it.

Wireframe tools : Balsamiq, Visio, Adobe Creative Suite and Axure

From the wireframes, create the mockups, with the grids in the background, and do so for the different screen resolutions. The SharePoint elements must be clearly identified, especially if the mockups are handed over to a development team, to avoid them having to guess what those elements are. Developing a design for Office 365 means working with a moving target.

Wednesday, 05 March 2014 18:21:44 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14

Speakers: Eric Overfield, Rita Zhang

2007 saw the first release of the iPhone, leading the change in the way people consume information. It also changed the way people approach design and user experience. With the increasing number of devices, it is no longer possible to have one site per device.

Responsive design is the concept of a single website being able to display on every kind of device, screen or browser. There are two methodologies, progressive enhancement and graceful degradation; responsive web design sits in the middle, encompassed in adaptive web design. Progressive enhancement is mobile first, while graceful degradation is desktop first.

Responsive design relies on fluid design, which means that elements have relative dimensions, in percentages, rather than absolute values. It also relies on media queries (@media in CSS); a media query allows screen sizes and orientations to be targeted. Be careful: IE8 does not support media queries, so a specific stylesheet has to be provided for it. Normally, though, IE8 only lives on desktops, which reduces the amount of work needed to adapt the design.

It is highly recommended to adopt a mobile-first approach and to start building code for the mobile interfaces. You also keep more control over the resources and avoid big images from the start. This helps focus on content first.

Regarding navigation, it has to be adapted to the different viewports, including the kind of navigation to use (dynamic or static). For mobile, a complete touch navigation experience has to be implemented.

Begin with site planning: content planning, site map, information architecture. The wireframes need to include mobile devices, along with the mockups. At the same time, design for the extremes. Always keep SharePoint in mind, and decide what will be part of the Master Page and what will be part of the Page Layout. How to handle the navigation is a frequent question.

After the wireframes and high-fidelity mockups, it is time for an HTML prototype. It is possible to define your own grid, or to reuse an existing framework and leverage the experience of other developers, which obviously saves time and budget. Many of these frameworks provide extra features, such as collapsing navigation. Nevertheless, it can take time to ramp up on a framework, and it may not be SharePoint-ready.

Some frameworks: Twitter Bootstrap, Zurb, Skeleton and the Less Framework.

Moving from the HTML prototype to SharePoint, the markup has to be split between the Master Page and the Page Layouts. It is possible to reuse the SharePoint components from the snippets gallery.

A key piece of advice (among others cited): develop for the real world (will mobile users ever need to edit pages?).

Wednesday, 05 March 2014 17:00:51 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14
# Tuesday, 04 March 2014

Speakers: Sanjay Narang, Luca Bandinelli

The requirement is to build an internet-facing site, highly customized, that needs to be always on, meaning minimum downtime and minimal data loss. At which level is high availability needed? It also has to cover natural disasters, which implies different datacenter locations. Azure has a connectivity SLA of 99.95%; if you want more, the solution has to be designed accordingly.

O365 is the place to go for collaboration, but not for internet scenarios. Therefore, Azure is the good option, able to scale on demand, and a SharePoint solution on top of Azure is supported by Microsoft. Specific features, such as blob storage and fast cross-datacenter transfer, will be very useful.

The solution is based on two different farms in two different Windows Azure regions, using custom log shipping jobs for data synchronization (and not SQL Always On). Traffic Manager will also be used.

Content and management databases will be synchronized. Search will have 2 search services: one for production, one for DR.

Virtual networks are a challenge, as they are restricted to a single datacenter. Also, an AD cannot span multiple datacenters. Therefore, the farms will be in different domains, preventing the use of SQL Always On, and a domain trust has to be set up.

The primary farm in Windows Azure will have an affinity group, in which a virtual network will be defined. Different cloud services will be defined, containing the virtual machines. Each of these elements needs to be always available, using availability sets. For the front-end servers, the Windows Azure Load Balancer can be used. For SQL Server, an Always On Availability Group will be set up, with an Availability Group Listener, which implies having all the clients in a different cloud service. For the custom log backups, blob storage will be used.

The DR farm is similar to the primary farm. The custom log shipping job will take the backups from blob storage. The content DBs and MMS DB are read-only and not part of an Always On AG. Search is created separately, crawls the read-only content DBs, and must be scheduled outside of the restore time window.

Custom log shipping is required on both farms; the backup and restore commands use a URL pointing at the blob storage. The challenge of having two farms with different ADs is that accounts differ from one farm to the other, so a plain backup/restore would not work. The accounts required by the DR farm must be added; once done, the databases are backed up and restored again on the primary farm, so that they contain the accounts of the DR farm.
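
To give an idea of what such a backup command can look like, here is a hedged sketch using SQL Server's backup-to-URL syntax wrapped in Invoke-Sqlcmd; the server, database, storage URL and credential names are all placeholders:

```powershell
# Ship a transaction log backup straight to Azure blob storage.
$sql = @"
BACKUP LOG [WSS_Content_Internet]
TO URL = 'https://mystorage.blob.core.windows.net/sqlbackups/WSS_Content_Internet.trn'
WITH CREDENTIAL = 'AzureBlobCredential', COMPRESSION
"@
Invoke-Sqlcmd -ServerInstance "SQL1" -Query $sql
```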

For search, log shipping can't be used. Having separate search services allows the SLAs to be kept and avoids copying the indexes. But this setup makes search analytics unusable at the global level.

The main component enabling failover is the Azure Traffic Manager. Requests will always be directed to the primary endpoint while it is available. A custom job polls the TM to check whether the target endpoint has changed. When the primary farm goes down, the TM detects it and redirects requests to the DR farm, which is read-only. The custom job detects it as well, and pauses the restore job to enable read-write access. TM takes about 90 seconds to detect that a farm is not available. Once the TM has switched to the DR farm, we need to prevent it from coming back to the primary farm when it is back online, as that farm is no longer primary.
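
The session did not detail that custom job but, since Traffic Manager answers DNS queries with the currently active endpoint, a minimal sketch of the idea could be DNS-based; all host names below are placeholders:

```powershell
# Poll the Traffic Manager profile and detect when it stops resolving
# to the primary farm, then stop restoring logs so the DR content
# databases can be brought online read-write.
$profileName = "contoso-web.trafficmanager.net"
$primaryHost = "primary-farm.cloudapp.net"
while ($true) {
    $active = (Resolve-DnsName -Name $profileName -Type CNAME).NameHost
    if ($active -ne $primaryHost) {
        Write-Output "Traffic Manager now points to $active; pausing the restore job."
        # Placeholder: pause the farm-specific log-restore job here.
        break
    }
    Start-Sleep -Seconds 60
}
```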

The issue now is that, once the switch to the DR farm is permanent, there is no DR anymore: a new one has to be rebuilt, similarly to how the original DR farm was built. During patching, the DR farm can be used temporarily but, think about the SLA, as it will be read-only. Consider also using the Content Delivery Network to cache pages and other content.

Tuesday, 04 March 2014 21:37:05 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14

Speakers: Sonya Koptyev, Greg Lindhorst

After the announcement of the InfoPath discontinuation, this session was expected to be quite full, and it was indeed the case: many people were seeking information about the future of forms.

4 main scenarios were presented.

Excel Surveys, with which questionnaires can be designed and proposed to users for completion. For each question, a column is added in the Excel worksheet. The different data types are supported, and the editor is simple to use.

A brand new feature, apparently shown for the first time: FoSL (Forms on SharePoint Lists). Available from the ribbon, next to the InfoPath "Customize Forms" button, it opens an editor showing the fields already available in the list. The designer lets the user place the fields wherever he wants on the design surface, and also resize them. In list editing mode, the form is displayed with a user interface similar to the one used in Access Services.

Another way to publish forms is to use structured documents, in other words, a Word document containing fields.

The last possibility is app forms with Access Services.

All the presented solutions are for information workers and do not involve development or code (no CSR, LightSwitch or Visual Studio).

Currently, the alternatives are multiple, from Nintex to Formotus, just to name two of them.

A roadmap was presented for the next year, and the features are not yet frozen, as community input is very welcome. InfoPath will stay for a while and will be supported until 2023.

There is currently no migration tool or techniques, and Microsoft is thinking about what can be done.

Tuesday, 04 March 2014 21:34:09 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14

Speaker: Rafal Lukawiecki

Data mining is about exploring data and finding correlations in it. It can also be used to make predictions and to find patterns. But prediction does not mean predicting the future: predicting the future would mean making the strong assumption that nothing will change around you.

Predictive analytics is about understanding customers and building effective marketing campaigns.

In order to do data mining, the data must have some structure, with attributes, flags, etc. But you have to flatten or de-normalize the data structures, which potentially means a lot of rows with many different columns.

As an output, there are analyses, such as a risk of fraud or a measure of happiness; another possible output is simply clusters or groups.

3 steps are necessary: defining the model (inputs and outputs), training the model, and validating the results, which is probably the most important one.

From the data, the data mining engine feeds a mining model.

On the backend, SQL Server with Analysis Services is required, starting with the 2008 version. Starting with 2012, SSAS comes in two flavors, multidimensional and tabular; but for data mining, no cube is needed.

On the frontend, only Excel is needed, plus the free Data Mining Add-ins. The data for the Data Mining Add-ins must reside in the Excel sheet. SQL Server Data Tools might be used to manage data mining projects; SQL Server Management Studio may be helpful as well.

For model validation and statistics, R is the reference (http://cran.r-project.org/), bringing additional statistical tools not available in Excel or SQL.

An excellent presentation by an excellent, enthusiastic speaker!

Tuesday, 04 March 2014 08:57:17 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14