Your .NET and Microsoft technologies specialist in Western Switzerland
# Thursday, 06 March 2014


Speaker : Ricky Kirkham

Updating a SharePoint app is necessary not only to fix bugs but also to bring new functionality. In the past, solutions and features were rarely updated, mainly because it was simpler to replace them and recycling the farm was required anyway. But a replace strategy was less cost effective.

Because developers don't know who their customers are (store apps), users need to be notified when an update of the app is available. For non-store apps, migration is harder: the app web domain of the old app differs from that of the new one.

An app update is deployed as an app package with a higher version number. A message tells the user that an update is available, and the app can be updated directly from its callout.

It is not possible to force users to update, and in some situations users may not have sufficient permissions. You can't assume that all instances of the app run the previous version, and only one version of the app can be in the store. A consequence is that, in an update scenario, it can't be assumed that the update is not a fresh install: the app package has to support an initial install AND an update from any version. The version number is the only way for an app to detect whether a previous version is deployed. If the update process fails to add a component, that component simply won't be installed. This means instances can be inconsistent even when they report the same version number, because an update on one instance failed to install a single component. It is also possible, by mistake, for different updates to add the same column twice or more.

Best Practice 1 : Test the update against all the different previous versions of the app. So, install each version in a different subweb of the test site and test the update on every one of them.

Best Practice 2 : Napa does not support updating apps; Visual Studio is a must. Roll back on error, for all components and data. Rollback is almost automatic, but the developer has to help. The version number in the app manifest must be changed and the app package uploaded again. It may be necessary to update the AppPermissionRequests and AppPrerequisites sections.

All app web components are in a single feature, so updating an app web means updating the feature manifest. The VS feature designer does not display the update elements, so it is necessary to disable the feature designer.

Best Practice 3 : Use the same version for the feature that you use for the app. <ElementManifests> is only processed on a clean install. <UpgradeActions> is processed on both clean install and update. <CustomUpgradeActions> are not applicable to apps. Update actions should not recur in further versions; that is the purpose of the <VersionRange> markup.
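
A minimal sketch of what such a feature manifest can look like after bumping the app to 2.0.0.0 (the feature ID, element file names and version numbers are illustrative, not from the session):

```xml
<!-- Hypothetical sketch of an app web feature manifest -->
<Feature xmlns="http://schemas.microsoft.com/sharepoint/"
         Id="11111111-2222-3333-4444-555555555555"
         Title="App Web Components" Scope="Web" Version="2.0.0.0">
  <!-- Processed on clean installs only -->
  <ElementManifests>
    <ElementManifest Location="Pages\Elements.xml" />
    <ElementManifest Location="NewList\Elements.xml" />
  </ElementManifests>
  <!-- Processed on clean installs and updates -->
  <UpgradeActions>
    <!-- No BeginVersion (see Best Practice 4): applies to any version below 2.0.0.0 -->
    <VersionRange EndVersion="2.0.0.0">
      <ApplyElementManifests>
        <ElementManifest Location="NewList\Elements.xml" />
      </ApplyElementManifests>
    </VersionRange>
  </UpgradeActions>
</Feature>
```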

Best Practice 4 : Do not use the BeginVersion attribute. There are two ways to update the host web : descriptive markup or code. Only two kinds of components can be deployed via markup : app parts and custom actions. On the good side, markup deployment performs a complete replacement of the previous version.

Best Practice 5 : When updating an app part, change the Name property of the ClientWebPart. Whole-for-whole replacement logic is only applicable when there is no risk of data loss. For provider-hosted apps, the update of the remote components is separate from the update of the SharePoint app.

Best Practice 6 : Changes to remote components must not break older versions of the app. For example, if a new version of the remote page introduces new features, it becomes available to all users immediately, but it will break for users on older app versions, as not all the SharePoint components it expects will be available.

Best Practice 7 : Pass the app version number to the remote page as a URL query parameter; this avoids the issue of the previous point. If all the changes are in remote components, don't update the SharePoint app. For a single-tenant provider-hosted app, the remote update is part of the same event. An Updated event handler is a remote event receiver that, using CSOM or REST, can do anything. It is registered in the app manifest and executed at the end of the update process, and can provide custom logic to update remote databases or host web components.
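
A sketch of how the remote page can consume that version (the SPAppVersion parameter name and the newFeaturePanel control are assumptions for illustration, not a SharePoint convention):

```csharp
// Hypothetical sketch: the remote ASP.NET page adapts to the app version that launched it
protected void Page_Load(object sender, EventArgs e)
{
    Version appVersion;
    if (!Version.TryParse(Request.QueryString["SPAppVersion"] ?? "", out appVersion))
        appVersion = new Version(1, 0, 0, 0);   // assume the oldest supported version

    // Render the new UI only when the SharePoint components it depends on exist
    newFeaturePanel.Visible = appVersion >= new Version(2, 0, 0, 0);
}
```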

Best Practice 8 : Catch all errors in the update event handler, because the update infrastructure does not know about exceptions raised there; the code has to roll back the updates itself.

Best Practice 9 : When there is an error, the code must roll back what the update did. This is typically done in the catch block of the remote event handler. If lucky, the backup and restore mechanism can be used. But the rollback needs to take into account that the previous version was not necessarily the latest; therefore, more than one rollback block is required, mirroring the update path.

Best Practice 10 : If you add a component to an app in an Upgraded event handler, be sure to add the same code to an Installed event handler, because a component that must be deployed during an update must also be deployed during a brand new installation. In the full install code, however, there is no need to test the version number.
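
Best Practices 8, 9 and 10 combine naturally in one handler. Below is a minimal sketch of a remote event receiver along these lines (the DeployComponents/RollbackComponents helpers are hypothetical placeholders):

```csharp
using System;
using Microsoft.SharePoint.Client.EventReceivers;

public class AppEventReceiver : IRemoteEventService
{
    public SPRemoteEventResult ProcessEvent(SPRemoteEventProperties properties)
    {
        var result = new SPRemoteEventResult();
        if (properties.EventType == SPRemoteEventType.AppInstalled ||
            properties.EventType == SPRemoteEventType.AppUpgraded)
        {
            try
            {
                // Shared deployment logic (BP 10): on upgrade, branch on the previous
                // version; on a fresh install there is no version to test.
                DeployComponents(properties);
            }
            catch (Exception ex)
            {
                // BP 8/9: the update infrastructure never sees exceptions thrown here,
                // so catch everything, roll back, and report the failure explicitly.
                RollbackComponents(properties);
                result.ErrorMessage = ex.Message;
                result.Status = SPRemoteEventServiceStatus.CancelWithError;
            }
        }
        return result;
    }

    public void ProcessOneWayEvent(SPRemoteEventProperties properties) { }

    private void DeployComponents(SPRemoteEventProperties properties) { /* CSOM or REST calls */ }
    private void RollbackComponents(SPRemoteEventProperties properties) { /* undo, per version range */ }
}
```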

Thursday, 06 March 2014 21:46:21 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14


Speaker : Richard Harbridge

Solutions should be rapidly deployed and easy to update. For a SharePoint solution to be adopted, it needs to be regularly updated and iterated, and it must be available externally, anywhere, on any device; a solution that is not on mobile will get little or no adoption. To answer demand quickly, what already exists must be leveraged.

A simple pros-and-cons of buy vs build is not that helpful; when it comes to SharePoint, it is not so simple. The needs of the organization must be mapped to the best technologies, which is not easy either, as there is a plethora of them: SharePoint alone has multiple options, such as online and on-premises, and different versions and editions. Moreover, there are 3.4 million developers, which means a huge number of partners, and many products sometimes filling the same gaps. Instead of a buy-vs-build comparison, go through an assessment process in which the needs are evaluated along with the capability of the organization, including its internal resources. If the capability is lacking, investigate whether an existing piece on the market can be used. More important is knowing how to build and how to buy pieces. A solution and its ecosystem need to be constantly evaluated.

There are two kinds of solutions : user driven and IT driven. The point of implementing SharePoint is to allow business users to develop and implement solutions without the involvement of IT. The best way is to start simple. Because everything is now an app, users are empowered. From an IT perspective, SharePoint is highly extensible.

Do not build a SharePoint solution if an Office app can do the job, or if the data should not be stored in SharePoint. A typical anti-pattern is storing relational data in a list rather than a database: if there are many-to-many relationships, the data definitely belongs in a database. When implementing a solution whose needs could be met by another product, clearly define the limit beyond which it would be better to go with that product rather than keep implementing it in SharePoint. SharePoint can still be used to validate concepts.

Before buying a 3rd party solution, it is crucial to understand the needs, and then to ask whether a practical out-of-the-box (OOB) solution exists. The buying process can be compared to a sales qualification process : first identify the needs and determine whether OOB options can be used; if not, establish the type of products that would help and the candidate vendors. To compare them fairly, a questionnaire must be established before, possibly, entering into negotiations and purchasing.

Nice web sites giving reviews of SharePoint solutions are available : PinPoint, SharePointReviews or even the Office Store. For feedback on products, analysts, customers and consultants are valuable sources, as are vendor whitepapers, although the latter can be biased.


Thursday, 06 March 2014 21:31:22 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14
# Wednesday, 05 March 2014


Speakers : Laura Rogers, Jennifer Mason

Adoption is all about knowing the users.

Tip 1 : Create meaningful navigation for users. The goal is to help users quickly find what they are looking for. To facilitate navigation, the quick launch links can be edited directly from the browser. The default Seattle layout can be switched to the Oslo layout, which removes the quick launch navigation to help focus on the content. Managed navigation uses managed metadata and term sets and can provide cross-site-collection navigation. A Promoted Links list helps users navigate the site.

Tip 2 : Create a personal experience. Different methods are available to personalize content : audience targeting, filtered views and out-of-the-box web parts. Audiences can be used to show or hide web parts for specific groups of users. But audience targeting is not security. All the filtered content can be surfaced on a single page, like a dashboard whose web parts use filtering.

Tip 3 : Drive process and automation of common tasks using workflow. Some automations that can be put in place are email notifications and scheduled reminders. SharePoint Designer was shown to demonstrate how to create workflows that, for example, send a notification when something important happens. In SharePoint 2013, workflows can have loops, which enables reminder workflows.

Tip 4 : Design your site to encourage social interaction. The home page, social features, ratings and likes can all help here.

Tip 5 : Utilize existing content and apps within your solutions. The goal here is to reuse apps that are available in the SharePoint store rather than reinventing the wheel.

Tip 6 : Take advantage of Office integration with SharePoint, using Office client links, Office Web Apps and live co-authoring. The embed code available in the document callout can be used to surface a document preview in a page.

Wednesday, 05 March 2014 21:13:59 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14


Speaker : Edin Kapic

Scalability issues never appear when developing or testing. That is why it is better to design for scalability upfront.

Autohosted applications are not suited to scalable architectures, as they can't be fine-tuned. Because apps now run outside SharePoint, the first advice is to minimize round-trips. Deploying an app does not by itself bring scalability; the app must be architected for it.

A scalable architecture should include a CDN, blob storage, table storage and distributed cache.

3 guidelines : avoid round-trips (caching, CDN), avoid bottlenecks (NoSQL, sharding, queues), avoid single points of failure (redundancy).

Caching is the cheapest mechanism to avoid round-trips; stale data is the drawback. A local cache sits on each instance of the server or service, while a distributed cache is shared across servers. Very frequently accessed data and static data should go in the local cache, but by default the distributed cache should be used. An example of an effective caching mechanism is DNS: there is a mini cache in the browser, a local cache at the operating system level, and finally another at the DNS resolver level.
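
A minimal sketch of the "local cache first" idea in .NET (the distributed-cache lookup is left abstract, since the session did not tie it to a specific product):

```csharp
using System;
using System.Runtime.Caching;

static class CacheHelper
{
    // Local cache: one per server instance, fastest, best for hot or static data
    static readonly ObjectCache Local = MemoryCache.Default;

    public static T GetOrAdd<T>(string key, Func<T> load, TimeSpan ttl) where T : class
    {
        var hit = Local.Get(key) as T;
        if (hit != null) return hit;

        // On a miss, a real implementation would try the distributed cache here
        // before paying for the round-trip to storage or to SharePoint.
        var value = load();
        Local.Set(key, value, DateTimeOffset.UtcNow.Add(ttl));
        return value;
    }
}
```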

CDNs are used to cache large blob data. Each blob can have a public URL (public blob); for private blobs, a shared access signature is part of the URL. The first user accessing content from the CDN pays the price of loading the blob into the CDN cache. Everything static, such as images, scripts and media files, should go in the CDN. To ensure the correct version of a blob is accessed, the URL can contain a version parameter.

Storage locks are one cause of bottlenecks: database locks appear when writing and reading requests are mixed. While relational data and SQL Azure provide immediate consistency, NoSQL and Table Storage offer eventual consistency. CQRS is a pattern that splits database operations into queries and commands for different processing; queries can be optimized by parallelizing, whereas commands can't. SharePoint 2013 uses more or less the same pattern: search queries are served from cache, other operations go to the content database. Sharding is partitioning data across multiple databases; the tenant ID in O365 is used as a partition key. This is a way to go beyond storage limitations, although join operations become more difficult.
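
A sketch of the sharding idea using a tenant ID as the partition key (the modulo mapping and connection strings are a simplification for illustration; real systems keep a shard map):

```csharp
using System;

static class ShardMap
{
    // Hypothetical connection strings for the shard databases
    static readonly string[] Shards =
    {
        "Server=sql0;Database=Content0;Integrated Security=True",
        "Server=sql1;Database=Content1;Integrated Security=True",
        "Server=sql2;Database=Content2;Integrated Security=True"
    };

    public static string ConnectionStringFor(Guid tenantId)
    {
        // Stable mapping from tenant to shard; joins across shards become the hard part
        int shard = (int)(BitConverter.ToUInt32(tenantId.ToByteArray(), 0) % (uint)Shards.Length);
        return Shards[shard];
    }
}
```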

Bottlenecks can also be reduced by using queues. The request/response model does not scale well and gets expensive very fast. By queuing requests, we add decoupling, retries can be implemented, and DDoS attacks can be mitigated. If the number of requests in a queue gets high, the workers can be scaled out. Azure storage queues are low level and use TCP/IP (end-to-end scenario); for a centralized queue system, Service Bus queues are a better fit. To notify the front end that a job is done, use a framework like SignalR. Async is a way to optimize request handling so that the same process can serve more than one request; async works even on a single thread, while multiple threads help when there are multiple cores. Until .NET 4.5, implementing such solutions was a bit difficult.
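
A minimal sketch of the decoupling with an Azure storage queue, using the 2013-era storage SDK (the queue name and message shape are illustrative):

```csharp
using Microsoft.WindowsAzure.Storage;
using Microsoft.WindowsAzure.Storage.Queue;

class JobProducer
{
    static void Enqueue(string connectionString, string payload)
    {
        var account = CloudStorageAccount.Parse(connectionString);
        CloudQueue queue = account.CreateCloudQueueClient().GetQueueReference("jobs");
        queue.CreateIfNotExists();

        // The web front end returns immediately; a worker pool drains the queue,
        // can be scaled out when the backlog grows, and retries failed messages.
        queue.AddMessage(new CloudQueueMessage(payload));
    }
}
```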

In a redundant design, the goal is to avoid relying on a single node, as the app must continue working if a node goes down. In redundant apps, each request must be idempotent. Load balancing is an example of redundancy. Azure Traffic Manager maintains a table of the available nodes by continuously probing them to check whether they are online. When a request comes to the Traffic Manager, it determines which server is the most appropriate before returning that address to the client.


Wednesday, 05 March 2014 21:06:26 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14


Speaker : Paolo Pialorsi

In SharePoint 2013, authentication is done using Classic mode (Windows identity), which brings an SPUser, or Claims mode, which is translated into a claims-based identity. It is also possible to authenticate a user through SAML.

Classic mode authentication is considered deprecated and is only available from PowerShell; legacy applications should migrate to Claims. Claims-based authentication is now the default mode and enables anonymous authentication, Windows authentication (NTLM or Kerberos), forms-based authentication (with the membership API, LDAP or a custom provider) and trusted identity providers (ADFS 2.0/3.0, Azure ACS, Azure Active Directory, or a custom IdP/STS).

The identity claim has a specific format, something like the following :

i:0#.w|piasys\paolo => Windows Account

i:0#.f|fbamembership|paolo => FBA Account

i:05.t|piasys-acs|paolo@pialorsi.com => SAML Account

The first letter defines whether it is an identity claim (i) or another kind of claim (c). Then comes the ':' separator, followed by '0', which is reserved. The next character gives the claim type (# = logon name, 5 = email, a = username, ? = name identifier, ! = identity provider, - = role), followed by the value type of the claim (. = string). The next letter gives the claim issuer (w = Windows, t = trusted identity provider, f = forms authentication, g = custom claims provider).

Together, these parts define the kind of claim, the way the user authenticated, the separators, the issuer name and the claim value.
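
A small sketch that decodes the parts of such an encoded login name, based on the positions described above (this is illustrative string parsing, not an official API):

```csharp
using System;

static class ClaimsLoginParser
{
    // Decodes e.g. "i:0#.w|piasys\\paolo" following the format described above
    public static void Describe(string login)
    {
        bool isIdentityClaim = login[0] == 'i';   // 'c' would be another kind of claim
        char claimType  = login[3];               // '#' = logon name, '5' = email, ...
        char valueType  = login[4];               // '.' = string
        char issuer     = login[5];               // 'w' Windows, 't' trusted IdP, 'f' FBA, ...
        string value    = login.Substring(login.LastIndexOf('|') + 1);

        Console.WriteLine("{0} claim of type '{1}' ({2}), issued by '{3}': {4}",
            isIdentityClaim ? "Identity" : "Other", claimType, valueType, issuer, value);
    }
}
```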

Windows Azure ACS 2.0 encompasses an identity provider and a security token service, which supports different identity providers such as Facebook and Google or any other custom WS-Federation-compliant provider. The specifications supported by ACS 2.0 are OAuth 2.0, WS-Trust, WS-Federation, SAML 1.1/2.0 and JSON Web Token (JWT).

When trusting an external identity provider, only claims are sent; SharePoint uses claims providers to get claim values. It has 3 out-of-the-box claims providers: for Active Directory, for forms-based authentication and one for any kind of trusted identity provider. A custom claims provider has to inherit from SPClaimProvider, which provides methods for name resolution, claim augmentation, etc. A claims provider requires a farm solution (so it is not suitable for Office 365).

App authentication is supported only for CSOM or REST API requests.

There are 3 app authentication models : internal app authentication, used by SharePoint-hosted apps; external app authentication via OAuth, supported by O365; and external app authentication via server-to-server, which is only supported on-premises.

Server-to-server is also called high-trust authentication, but that does not mean full trust. It establishes a direct trust between the servers and is based on X.509 certificates. It is available for provider-hosted apps and configurable using PowerShell.
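
A minimal configuration sketch using the cmdlet family mentioned in the session (certificate path, name and issuer GUID are illustrative):

```powershell
# Hypothetical sketch: register the certificate of a high-trust provider-hosted app
$cert = New-Object System.Security.Cryptography.X509Certificates.X509Certificate2("C:\Certs\HighTrustApp.cer")
$realm = Get-SPAuthenticationRealm
$issuerId = "11111111-2222-3333-4444-555555555555"   # illustrative issuer GUID

New-SPTrustedSecurityTokenIssuer -Name "High Trust App" `
    -Certificate $cert `
    -RegisteredIssuerName "$issuerId@$realm" `
    -IsTrustBroker
```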

For authorization, the SPUser and SPGroup classes, inheriting from SPPrincipal, are the main actors, almost unchanged from SharePoint 2010. They define user or group principals that can be given explicit permissions on sites, lists, items, etc. Authorization relies on permission levels, which are nothing more than sets of permissions.

Apps are not users, and their permissions are granted all or nothing. An app can include permission requests; if they are all granted, the app can be installed, but if even one request is not granted, the app will not be installed. Permissions cannot be changed after assignment, only revoked. An app has full control only over its own app web. Permission requests target scopes and rights: scopes such as site collection or list, and rights such as Read, Write, Manage or FullControl. Rights can also target a specific service. Rights and scopes are not customizable.
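
For example, a permission request section in the app manifest looks like the following sketch (the scope URI and right shown are from the standard scheme; treat the exact values as illustrative):

```xml
<!-- Sketch of an app manifest fragment requesting write access to a list -->
<AppPermissionRequests>
  <AppPermissionRequest
      Scope="http://sharepoint/content/sitecollection/web/list"
      Right="Write" />
</AppPermissionRequests>
```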

Wednesday, 05 March 2014 20:55:50 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14


Speakers : Agnes Molnar, Israel Vega Jr

Information architecture is the science of organizing and labelling content so that it can be found easily. How much does it cost to not find information ?

Governance is important: without it there are no naming conventions (documents, document libraries), no structure (everything stored in the Shared Documents library) and no way to know where to store a new document, leading to the creation of new libraries and duplication. Some organization is needed.

A naming convention implies content type definitions and terms, along with the correct structure.

Some IA design starting points : the user interface (or a random pick), cost, business needs, willingness to train, politics, IT restrictions.

IA components include document IDs, sites, and so on; SharePoint offers a wide range of them. Start, for example, with metadata and content types, then navigation components; the trend, though, is going toward search-driven navigation. Several master pages may be needed to display different kinds of content. Visual design can help organize the presentation of the information.

Never modify out-of-the-box content types such as Item or Document. Instead, inherit from the Document content type to create a company document type, and subsequently legal, sales and finance document content types. Each content type can be associated with different term groups or policies (e.g. content disposition).

Logical navigation is typically the top navigation, secondary navigation, current, recent and breadcrumb. Physical navigation is structural, such as the quick links.

Home pages normally just surface content from other locations of the site. Summaries of summaries are content rollups; detail pages are the content itself.

Metadata may have different meanings depending on the context, so be careful. Conversely, several fields may share the same meaning. That is why there is the notion of crawled and managed properties: crawled properties are metadata that SharePoint can understand but that may not make any sense to the user, while managed properties group crawled properties that have the same meaning.

The recommended IA process starts by asking what we are trying to do, why, and how we will know whether it is right or wrong.

During SharePoint implementation planning, user workshops, input from IT, design and some vision are needed. There is no need to be right the first time, but planning is crucial. To start a migration, first make an inventory of the existing content (size, metadata, owners, security). Cleanup may be necessary, and it is important to know what is and is not working today.

The cloud seriously has to be considered. It is not necessary to model everything, though; it is better to have something than nothing.

The document lifecycle needs to be standardized, thinking about governance first. Authoring happens in many places; for example, content can be created in a single location but displayed in many different places.

When migrating to the cloud, the challenges are the following : managed paths, multiple web applications, host header site collections, custom site definitions and large content.

Another motivation is to make search better; search is based on crawling, indexing, ranking and result display. Admins have to work on optimization, but users are responsible for the content.

A hybrid integration means separating the workloads, for example sending new content to the cloud and keeping old content on-premises. Whether content should go to the cloud also depends on licensing. My Sites can go to the cloud directly; departmental collaboration can too, depending on the sensitivity of the information. In a hybrid setup, the on-premises and cloud taxonomies are different.

To understand whether things are going in the right direction, everything must be measured; measurements are used to improve things.

Wednesday, 05 March 2014 20:35:44 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14


Speaker : Cathy Dew

Responsive web design reacts to screen size and orientation. Adaptive web design adjusts the content in the design. Usually, for SharePoint, she starts with 3 different breakpoints : 1024px, 768px and 320px.

Don't try to make mysites responsive !

Design Manager is not good for intranets (lists, etc). Usually, responsive design is deployed on-premises through a full-trust solution, which is not possible on Office 365.

Responsive design implementation is based on grids, i.e. grid-based layouts, and the grid must be flexible.

The key is to make everything flexible, such as images that can resize. But, there are some limitations with IE7 and lower.

Media queries are based on media types defined by the W3C. A media query targets a device based on screen resolution and orientation. Ensure navigation consistency, which becomes more and more important as the device gets smaller. In SharePoint, there are the top, left and breadcrumb navigation: how are these going to be translated on the different devices ? For example, the quick launch navigation may disappear if the screen is not big enough. What to do with the ribbon ? Maybe it is not needed on a smartphone, but required on a desktop.
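
A minimal sketch of how those decisions can translate into CSS, using the SharePoint 2013 element IDs for the quick launch, content area and ribbon (the breakpoints come from the session; the exact rules are illustrative):

```css
/* Desktop-first sketch: hide navigation chrome as the viewport shrinks */
@media screen and (max-width: 768px) {
  #sideNavBox  { display: none; }   /* quick launch disappears on small screens */
  #contentBox  { margin-left: 0; }  /* reclaim the space it used */
}

@media screen and (max-width: 320px) {
  #s4-ribbonrow { display: none; }  /* ribbon not needed on a smartphone */
}
```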

Start from the smartphone version.

Step one is the wireframes, which separate design from functionality and avoid focusing on small design details; they decide how content will be displayed (not how it will look). The wireframes already have the grid as an overlay to help the transition to the mockups. The most important content must be above the fold, especially on mobile devices, to avoid having to scroll down to reach it.

Wireframe tools : Balsamiq, Visio, Adobe Creative Suite and Axure

From the wireframes, create the mockups, with the grids in the background, and do so for the different screen resolutions. The SharePoint elements must be clearly identified, especially if the mockups are handed over to a development team, to avoid them having to guess what those elements are. Designing for Office 365 means working with a moving target.


Wednesday, 05 March 2014 18:21:44 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14


Speakers : Eric Overfield, Rita Zhang

2007 saw the first release of the iPhone, leading the change in the way people consume information. This also changed the way designs and user experiences are built. With the increasing number of devices, it is no longer possible to have one site per device.

Responsive design is the concept of one web site able to display on every kind of device, screen or browser. There are two methodologies, progressive enhancement and graceful degradation; responsive web design sits in the middle and is encompassed by adaptive web design. Progressive enhancement is mobile first, while graceful degradation is desktop first.

Responsive design relies on fluid design, which means that elements have relative dimensions, in percentages, rather than absolute values. It also relies on media queries (@media in CSS), which allow targeting screen sizes and orientations. Be careful: IE8 does not support media queries, so a specific stylesheet has to be provided for it; fortunately, IE8 normally only runs on desktops, reducing the amount of work needed to adapt the design.
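
The IE8 fallback is typically wired with a conditional comment, something like the following (the stylesheet path and name are illustrative):

```html
<!-- Served only to IE8 and below, which ignore @media rules -->
<!--[if lte IE 8]>
  <link rel="stylesheet" href="/Style Library/desktop-fallback.css" />
<![endif]-->
```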

It is highly recommended to adopt a mobile-first approach and start building code for mobile interfaces. You also get more control over the resources and avoid big images from the start. This helps focus on content first.

Navigation has to be adapted to the different viewports, including the kind of navigation to use (dynamic or static). For mobile, a complete touch navigation experience has to be implemented.

Begin with site planning, such as content planning, site map, information architecture. The wireframes need to include mobile devices, along with the mockups. At the same time, design for the extreme. Always remember SharePoint, and decide what will be part of the Master Page and Page Layout. How to handle the navigation is a frequent question.

After the wireframes and high-fidelity mockups, it is time for an HTML prototype. It is possible to define your own grid or to reuse an existing framework and leverage the experience of other developers, which obviously saves time and budget. Many of these frameworks provide extra features, such as collapsing navigation. Nevertheless, it can take time to ramp up on a framework, and it may not be SharePoint-ready.

Some frameworks : Twitter Bootstrap, Zurb Foundation, Skeleton, Less Framework.

Moving from the HTML prototype to SharePoint, the markup has to be split between the master page and the page layouts. SharePoint components can be reused from the snippet gallery.

One key piece of advice (among others cited) : develop for the real world (will mobile users really need to edit pages ?).

Wednesday, 05 March 2014 17:00:51 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14
# Tuesday, 04 March 2014


Speakers : Sanjay Narang, Luca Bandinelli

The requirement is to build an internet-facing site, highly customized, that needs to be always on: minimum downtime and minimal data loss. At which level is high availability needed ? It also has to cover natural disasters, which implies different datacenter locations. Azure has a connectivity SLA of 99.95%; if you want more, the solution has to be designed accordingly.

O365 is the place to go for collaboration, but not for internet scenarios. Azure is the right option there and is able to scale on demand; SharePoint on top of Azure is a Microsoft-supported solution. Specific features, such as blob storage and fast cross-datacenter transfer, will be very useful.

The solution is based on two different farms in two different Windows Azure regions, using custom log-shipping jobs for data synchronization (not SQL AlwaysOn). Traffic Manager will also be used.

The content and Managed Metadata Service databases will be synchronized. Search will have 2 search services: one for production, one for DR.

Virtual networks are a challenge, as they are restricted to a single datacenter, and an AD cannot span multiple datacenters. Therefore, each farm will be in a different domain, preventing the use of SQL AlwaysOn across farms, and a domain trust has to be set up.

The primary farm in Windows Azure will have an affinity group, in which a virtual network will be defined. Different cloud services will contain the virtual machines, and each of these elements needs to be always available, using availability sets. For the front-end servers, the Windows Azure load balancer can be used. For SQL Server, an AlwaysOn Availability Group will be set up, with an Availability Group listener; this implies having all the clients in a different cloud service. For the custom log backups, blob storage will be used.

The DR farm is similar to the primary farm. The custom log-shipping job takes the backups from blob storage. The content DBs and MMS DB are read-only and not part of an AlwaysOn AG. Search is created separately, crawls the read-only content DBs, and must be scheduled outside of the restore window.

Custom log shipping is required on both farms; the backup and restore commands use a URL pointing to blob storage. The challenge of having two farms with different ADs is that accounts differ from one farm to the other, so a plain backup/restore will not work. The accounts required by the DR farm must be added; once that is done, the database has to be backed up and restored on the primary farm, so that it contains the accounts of the DR farm.
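
The "backup to URL" part of such a custom job would look something like this sketch (database, storage account, container and credential names are illustrative; the feature requires SQL Server 2012 SP1 CU2 or later):

```sql
-- Hypothetical sketch: ship a transaction log backup straight to blob storage
-- (the 'AzureBlobCredential' credential must first be created from the storage account key)
BACKUP LOG [WSS_Content]
TO URL = 'https://mystorageaccount.blob.core.windows.net/logbackups/WSS_Content_20140304.trn'
WITH CREDENTIAL = 'AzureBlobCredential', COMPRESSION;
```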

For search, log shipping can't be used. Having separate search services preserves the SLAs and avoids copying the indexes, but it makes search analytics unusable at the global level.

The main component enabling failover is the Azure Traffic Manager. Requests are always directed to the primary endpoint while it is available. A custom job polls the TM to check whether the target endpoint has changed. When the primary farm goes down, the TM detects it and redirects requests to the DR farm, which is read-only; the custom job detects the switch as well and pauses the restore job to enable read-write access. The TM takes about 90 seconds to detect that a farm is unavailable. Once the TM has switched to the DR farm, it must be prevented from falling back to the primary farm when it comes back online, as that farm is no longer primary.

The issue now is that once the switch to the DR farm is permanent, there is no DR anymore; it has to be rebuilt, similarly to how the original DR farm was built. During patching, the DR farm can be used temporarily, but think about the SLA, as it will be read-only. Also consider using the Content Delivery Network to cache pages and other content.

Tuesday, 04 March 2014 21:37:05 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14


Speakers : Sonya Koptyev, Greg Lindhorst

After the announcement of InfoPath's discontinuation, the session was expected to be quite full, and it was indeed the case: many people were seeking information about the future of forms.

4 main scenarios were presented.

Excel Surveys, with which questionnaires can be designed and proposed to users to fill in. For each question, a column is added to the Excel worksheet. The different data types are supported and the editor is simple to use.

A brand new feature, apparently shown for the first time : FoSL (Forms on SharePoint Lists). Available from the ribbon next to the InfoPath "Customize Forms" button, it opens an editor showing the fields already available in the list. The designer lets the user place fields wherever he wants on the design surface and resize them. In list editing mode, the form is displayed with a user interface similar to the one used in Access Services.

Another way to publish forms is to use structured documents, in other words, a Word document containing fields.

The last possibility is App forms or Access Services.

All the presented solutions are for information workers and require no development or code (no CSR, LightSwitch or Visual Studio).

Currently, multiple alternatives exist, Nintex and Formotus just to name two of them.

A roadmap was presented for the next year; the features are not yet frozen, and community input is very welcome. InfoPath will stay for a while and will be supported until 2023.

There is currently no migration tool or techniques, and Microsoft is thinking about what can be done.

Tuesday, 04 March 2014 21:34:09 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14


Speaker : Rafal Lukawiecki

Data mining is about exploring data and finding correlations within it. It can also be used to make predictions and find patterns. But prediction does not mean predicting the future: predicting the future means making the strong assumption that nothing around you will change.

Predictive analytics is about understanding customers and building effective marketing campaigns.

In order to do data mining, the data must have some structure: attributes, flags, etc. The data has to be flattened or de-normalized, which potentially means a lot of rows with a lot of different columns.

As an output, there are analyses, such as a risk of fraud or a happiness score. Another output can simply be clusters or groups.

3 steps are necessary : defining the model (inputs and outputs), training the model, and validating the results, which is likely the most important one.

From the data, the data mining engine feeds a mining model.

On the backend, SQL Server with Analysis Services is required, starting with version 2008. Since 2012, SSAS comes in two flavors : multidimensional and tabular. But for data mining, no cube is needed.

On the frontend, only Excel is needed, plus the free Data Mining Add-ins; the data for the add-ins must reside in the Excel sheet. SQL Server Data Tools can be used to manage data mining projects, and SQL Server Management Studio may be helpful as well.

For model validation and statistics, R is the reference (http://cran.r-project.org/), bringing additional statistical tools not available in Excel or SQL.

An excellent presentation by an enthusiastic speaker !


Tuesday, 04 March 2014 08:57:17 (GMT Standard Time, UTC+00:00)
SP2013 | SPC14
# Wednesday, 22 May 2013

Title : Getting Started with SharePoint 2013

Author : Robert Crane

Summary :
This book explores the very first steps of SharePoint 2013, using a standard team site. It starts with an explanation of how to use document libraries, calendars and some other types of libraries and lists, then finishes with search and the recycle bin.

Book Review :
For the price of the book, there was no risk in having a look and reading it. Unfortunately, it stays at the very basic level of the usage of only some of the library and list types. Yes, it explains how to upload a file or how to recover a file from the recycle bin, but, from my point of view, most of the things described in this book can be discovered by a user simply exploring the platform. Moreover, it only covers some of a team site's features. In my opinion, this book can be skipped, and a reader who wants to explore SharePoint 2013 should rather go directly to a book like SharePoint 2013 For Dummies (which I haven't read yet) that goes beyond where Getting Started with SharePoint 2013 stops.

Wednesday, 22 May 2013 21:31:16 (GMT Daylight Time, UTC+01:00)
Book Review | SP2013
# Friday, 17 May 2013

Title : SharePoint 2013 – Planet of the Apps 2.0

Author : Sahil Malik

Summary :
SharePoint 2013 comes with a new development model, based on Apps. This book goes through the different kinds of Apps, giving examples of each, explaining what Apps are and building each example on top of the previous one. It is an introductory book and is not intended to be an in-depth one covering all the details of App development, which is understandable given how vast the topic is.

Book Review :
The very good thing is that the book is written in such a way that you read it fast. It is not a 600-page paving stone, and for giving an overview of SharePoint 2013 Apps it is perfect. It starts with a really simple SharePoint-hosted App and, going further, adds complexity and ends with the server-to-server type of App, covering permissions, Azure ACS and many aspects that a developer first putting his hands into App development should know. That said, as some subjects are complex, some parts of the book should be read carefully, and some time should be spent to really understand certain notions before moving on to the next example or chapter. Additionally, the writing style is nice, and Sahil uses good humor to help digest some topics.

For me, this is the book to start with (though, admittedly, I haven’t read many App development books so far; that is coming…), giving the first steps to develop SharePoint 2013 Apps. It is both short enough and long enough to get a nice understanding, and finally, it is fun. And remember, “Hash is legal in Amsterdam (almost)” (ref. to the first version of the book).

Friday, 17 May 2013 11:30:00 (GMT Daylight Time, UTC+01:00)
Book Review | SP2013
# Thursday, 16 May 2013

(screenshot : "The resource cannot be found" 404 error page)

After the setup of a new SharePoint 2013 environment, I started testing it by creating a really simple SharePoint-hosted App, a basic “Hello World” App. For this environment, I am using a Visual Studio 2012 development machine remote from the SharePoint 2013 box. To test this very simplistic application, I just pressed F5 to launch the VS debugger, landed on the SharePoint 2013 page, and was able to see my App in the quick launch menu. But when I clicked on the link, I got a nice “The resource cannot be found” (404) error, as shown in the picture at the beginning of this post.

I checked several times the SharePoint 2013 App settings, such as the “App domain” URL and the “App prefix”, and they were correct. I also checked the DNS settings and the bindings to the IIS site and everything was perfect.

During my troubleshooting, I saw that deploying the App manually worked perfectly. That meant there was a difference between a deployment done with VS and a manual one, or in the execution of the App.

Googling a bit, I found this post on the Microsoft forums : http://social.msdn.microsoft.com/Forums/en-US/appsforsharepoint/thread/188d78d8-8c35-46df-8770-695d1258ad18/

In this long thread, people mention adding a colon to the loopback IPv6 address that VS adds to the hosts file (located in %Windir%\System32\drivers\etc), making the ::1 address invalid. This indeed worked for me, but raised another question: VS was adding two IP addresses for the same host :

```
10.180.128.195 apps-0ba3bca00437eb.apps.myserver.mydomain.com
::1 apps-0ba3bca00437eb.apps.myserver.mydomain.com
```


Clearly, ::1 is the IPv6 equivalent of 127.0.0.1, but my App was not running locally; it was running on the 10.180.128.195 server. So why was the IPv6 entry wrong and not equal to my SharePoint 2013 server's IPv6 address ?

While in debug mode, I replaced the ::1 address with the real IPv6 address of my SharePoint 2013 server. And… it worked like a charm.

So far, coming from the many different tests I did, my theories are the following (be cautious, they still need to be confirmed) :

  • Adding a colon to the loopback IPv6 address makes it invalid (RFC 5952), which causes my development machine to fall back to IPv4 to connect to the server.
  • The reason why VS adds the loopback IPv6 address instead of the correct one is likely that it cannot resolve the host name over IPv6; rather than adding no entry at all, it adds the ::1 address.

As also written in the MSDN forum, disabling IPv6 is a good way to avoid manually changing the hosts entries for every debugging session, and it is most probably not an issue for most people.

Thursday, 16 May 2013 11:38:00 (GMT Daylight Time, UTC+01:00)
SP2013 | Technical
# Friday, 16 November 2012

Speaker : Mirjam Van Olst

With SP2007, web templates required creating site definitions deployed in the SharePoint hive on the file system. Changes to the template needed a change to the site definition.

From SP2010, web templates can be changed afterwards, even if sites were already created based on that template. Templates were also saved as .wsp files.

A web template can be site or farm scoped and uses the WebTemplate feature element; only the onet.xml and elements.xml files are required. "Site template" and "web template" can now be used interchangeably: they appear the same to the user, with no difference. "Save as a site template" creates a sandboxed solution, stores it in the site collection gallery, and the result can be imported into Visual Studio. But the import is difficult, and it is probably better to create a new site definition. Web templates are based on a site definition, but do not inherit from it. Saving a publishing site as a template is not supported.

Some web template limitations : feature stapling can't be used, and neither can variations (which would be the only reason to go for a custom site definition).

Web template provisioning steps : first, the URL for the site is created; secondly, the GLOBAL onet.xml file is provisioned. For a site-collection-scoped web template, the site collection features are activated in the order they are declared in the onet.xml; for a sub-site-scoped web template, a check that the site collection features are activated has to be done. Then, site-scoped features are activated in the order they are defined in the onet.xml. Finally, list instances are created. If a feature needs to modify a list, it can't be done this way, as no list exists yet at activation time; therefore, list creation should be done during feature activation (event receiver).

A web template requires some properties : BaseTemplateName, BaseTemplateID and BaseConfigurationID. When starting a web template, it is recommended to take a copy of an out-of-the-box onet.xml and strip it down rather than starting from scratch. A web template onet.xml can only contain one configuration, and its configuration ID has to be 0. The Modules element is not supported in onet.xml.

Recommendations : use only as many features as needed and limit the number of features that must be activated (site provisioning gets slow otherwise). Be careful: site-scoped features can block sub-site creation.

There are two ways to deploy a web template : as a farm solution or as a sandboxed solution. The farm solution makes the web template available in the whole environment; with a sandboxed solution, the onet.xml and elements.xml files are stored in the content database, and the template can also be deployed on Office 365. In most cases, everything done in a web template can be put in a sandboxed solution, but make sure the solution can be removed after a site has been created.

A web template can be used from code to provision new sites, using the web template feature GUID and name separated by the hash sign. It is also a good idea to store the web template name and version in the property bag (<PropertyBag> element) of the site.
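
A minimal server-side sketch of both points (the site URL, feature GUID, template name and property bag keys are illustrative):

```csharp
using Microsoft.SharePoint;

static class SiteProvisioner
{
    // Hypothetical sketch: provision a sub-site from a web template, "<featureGuid>#<TemplateName>"
    public static void CreateProjectWeb()
    {
        using (SPSite site = new SPSite("http://intranet"))
        using (SPWeb rootWeb = site.OpenWeb())
        using (SPWeb newWeb = rootWeb.Webs.Add(
            "projects", "Projects", "Created from a web template", 1033,
            "11111111-2222-3333-4444-555555555555#ProjectWebTemplate",
            false, false))
        {
            // Record which template (and version) the site was built from
            newWeb.Properties["WebTemplateName"] = "ProjectWebTemplate";
            newWeb.Properties["WebTemplateVersion"] = "1.0.0.0";
            newWeb.Properties.Update();
        }
    }
}
```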

A webtemp file can be linked to several site definitions.

Apps for SharePoint must be self-contained.

The domain of the app web is different from the one the user is browsing (the host web). The app web can be created starting from the APP#0 site definition, or using a custom web template deployed in the app itself in a web-scoped feature; it has to be referenced in the appmanifest.xml file.

Friday, 16 November 2012 01:02:00 (GMT Standard Time, UTC+00:00)
SP2013 | SPC12
# Thursday, 15 November 2012

Speakers : Oleg Kofman, Jon Epstein

The goal of SharePoint governance is to keep both IT and users happy and to set some processes in place. It should involve a broad range of people, from the business (really important to get adoption) to the network people. The legal and compliance teams also become even more important, as data and files move online and licensing concerns arise.

SLAs should be published, in order to limit the number of escalations and to help set expectations.

So far, there are 3 models : farm solutions (since 2007), sandboxed solutions (deprecated) and SP Apps. Sandboxed solutions, even if deprecated, are not yet gone; the recommendation is to convert them. Apps are the preferred option for multi-tenant Office 365: easy to deploy, maintain and reuse, but with no server-side code. Even if an App itself has no server-side code, it can be an umbrella on top of another solution that has.

Process to determine whether an App can be used : first, check whether something already exists in the enterprise catalog and whether the need can be met without code, to avoid reinventing the wheel. Then check the SP Store or 3rd party vendors for a build-vs-buy decision. Next, check whether a timer job or any other server-side code needs to be developed. Will it save time, and who will maintain the solution (once the developers are gone) ? The last step is to define who will publish the App in the store.

Different hosting models : SharePoint-hosted, where a subweb is created when a user deploys an App (so, if 10,000 users install the App, potentially as many subwebs are created !) and no server-side code is allowed; provider-hosted, to host the App on your own, possibly completely separate, infrastructure, enabling other languages, such as PHP, for App development; and autohosted, where a Windows Azure Web Site and a SQL Azure DB are automatically provisioned when the App is installed.

So far, every developer needed his own SP farm. With the new App model, this is no longer really required, as the developer can stay within a single site.

Publishing an App requires a choice between two App scopes : web scope (one site per instance) or tenant scope (one site per tenant). This can't be defined by the developer (there is no manifest entry). It is important to publish the evaluation criteria for App permissions, so that developers know what is expected and what is allowed in terms of App permissions. High-trust Apps (for on-premises) require more scrutiny. New App versions may also request different permissions, so it is important to check, from a governance perspective, which permissions are requested and to challenge them. Plan SLAs for publishing requests: someone has to proactively look for the requests and approve them. Plan who will highlight, add and (technically) review Apps. It is possible to have one App catalog per web application, but not to share an App catalog across web applications; therefore, define whether Apps should be published in all the catalogs or not, and put the corresponding process in place.

On the operations side, it is important to monitor usage, errors and licensing. This can be done from Central Admin, and App monitoring information is stored in the usage database.

Thursday, 15 November 2012 23:32:00 (GMT Standard Time, UTC+00:00)
SP2013 | SPC12

Speaker : Sesha Mani

SharePoint 2013 Server-to-Server (S2S) scenarios implemented out of the box : SP to Exchange, SP to SP and SP to MTW (multi-tenant workflow service). OAuth is a standard that enables an application to access a user’s resources without prompting for the user’s credentials. S2S is a kind of extension of OAuth 2.0 that allows a high-trust relationship between applications. An application principal is like a user principal (user identity), but for applications.

S2S implementation in O365

All S2S applications must exist in MSO-DS (a kind of application directory). ACS plays the role of trust broker. When someone connects to SP Online, SP Online calls ACS to authenticate itself (with a certificate exchange), asks to talk to Exchange Online and tries to get a token. ACS validates the SP Online token and checks the requested endpoint before issuing an STS token. SP Online augments the user identity information before sending the STS token (composed of the inner token – the basic ACS token – and the outer token – containing the added user claims). Exchange Online validates the STS token, ensuring it has been issued by ACS. It also validates that the inner-token endpoint is the same as the outer-token endpoint, then ensures that the user has the necessary permissions. Basically, the application identity is the inner token, while the user identity is the outer token. Finally, Exchange Online returns the resources to SP Online.

This scenario is also valid for on-premises.

S2S authentication - On-Premise Only

SP hosts the App Management Service, the STS and the User Profile Application (UPA) service. Exchange hosts an STS as well. Establishing trust between the two STSs can simply be done using PowerShell commands (New-SPTrustedSecurityTokenIssuer and New-PartnerApplication). A user connects to SP and wants to perform an activity spanning both SP and Exchange. The SP STS issues an STS token containing the outer and inner tokens. That STS token is sent to Exchange, which checks whether it accepts delegation for this token; endpoint checks are also done between the inner- and outer-token information. Exchange checks the user’s permissions before returning the resources to SP. The configuration steps are : STS trust establishment (using the PS cmdlets), permissions for the principal, and scenario-specific settings.

Hybrid Solution

In the cloud, MSO-DS synchronizes with ACS, and SP Online trusts ACS, which plays the role of trust broker. On-premises, the setup is the same as in the previous scenario. In addition, the SP STS synchronizes with the SP Online STS, and AD synchronizes with MSO-DS. A user makes a query to SP Online through the on-premises SP. SP issues an STS token to connect to SP Online; the request is sent to ACS for validation. Then SP sends the augmented token (containing the e-mail address – SMTP – and the UPN) to the SP Online STS. SP Online accepts the token and returns the resources back to the on-premises SP.

Topologies not supported by S2S

Cross-premises or cross-product S2S calls (on-premises SP calling Exchange Online), cross-tenant scenarios (Contoso to Fabrikam), S2S calls from an SP farm without AD to Exchange or Lync on-premises, and web applications using Windows classic authentication.

Thursday, 15 November 2012 21:58:00 (GMT Standard Time, UTC+00:00)
SP2013 | SPC12

Speaker : Eric Shupps

The SP2013 model still has sites, content and service APIs, but now has Apps, with a package, HTML/JS (or another technology) and data. For an App to be authorized in SharePoint, OAuth is used.

The autohosted App model enables SharePoint to automatically deploy the package on Azure; be careful of the limitations of this model. When creating an App for SharePoint, Visual Studio actually creates two projects : the App project and the SharePoint project. The App project is the entity that will be deployed. Deploying from Visual Studio only works against a SharePoint site created from the developer site template. A TokenHelper.cs file is automatically created in the VS solution to deal with all the token-related operations and authorization; it is used to get the client context.

An App can consume SQL data using WCF/JSON/XML, SharePoint data using OAuth/REST/CSOM or Office data using HTML/XML.

He wrote a WCF “proxy” that interfaces with the SQL database and serializes/deserializes the data as JSON for consumption from JavaScript.

Javascript is the language to use when dealing with Office data.

To get the data, the app web REST API has to be called. Be sure to deploy at least one SharePoint artifact, otherwise no app web will be created. It is easy to add the chrome look and feel by using a bit of JavaScript.

SQL Azure database tables must have a primary key before deployment. Azure Virtual Machines come in 5 sizes (XS/S/M/L/XL) and have persistent storage and virtual networking. A web role is an Azure VM; it can be shared or reserved, 3rd party assemblies can be deployed, and TFS/Git/Web Deploy are also available. Azure Web Sites are free but only contain the default assemblies (e.g. WIF is not there and can’t be deployed); TFS/Git/Web Deploy are also available.

SharePoint Apps use HTTPS only.

Office 365 Apps use HTTPS only and need a unique App ID. In order to “F5” deploy, the target has to be a developer site. Publishing to the Office Store requires App and package validation. The Office Store is public, whereas the App catalog is private and therefore does not require validation or licensing.

Thursday, 15 November 2012 06:02:00 (GMT Standard Time, UTC+00:00)
SP2013 | SPC12

Speakers : Mike Ammerlaan, Neil McCarthy

To integrate Yammer into an App, “Yammer Embed” is the tool to use. It provides support for profile information and communicates with the Yammer platform. When a conversation is started in your application, the embed also posts it on the Yammer network. Conversations can be at the site level or at the item or document level, the latter using the Open Graph protocol.

Yammer is exposed as a set of protocols and APIs that allow building any kind of application (an ASP.NET application was shown as an example). Documentation can be found at http://developer.yammer.com.

An App needs a key (client ID) and can be proposed in the company’s App directory. Your application has to be registered from the Yammer web application; keys and tokens are then delivered. Some other information has to be filled in, such as the web site and URIs.

No server-side code is needed; using server-side code is actually more difficult than using JavaScript (an example was shown). A single reference to yam.js is needed in the HTML, and that reference is where the client ID is specified. OAuth is used to authorize the application.

The REST APIs support messages (be careful not to flood users' feeds), groups, users, suggestions, autocomplete, search and networks.

An activity story is composed of an actor, an object and an activity (“Robert Red has updated this file”). Activity stories appear in the activity feed (“Recent Activity”). A story contains a URL (e.g. to a document), a type, an image, a title, the name and e-mail address of the actor, an action (e.g. update; custom actions are also supported), and a message.

From SharePoint, remote event receivers can be used to publish activities happening in SharePoint to Yammer. But for this to work, the OAuth token must be cached so it can be reused when calling the Yammer REST APIs. For example, the Yammer APIs can be called for each update on a document or list item.

User-enabled and Admin-enabled Apps need the non-free version of Yammer. Admin-enabled Apps are useful for posting information on behalf of users (impersonation).

Global Yammer App development steps : register the app, develop it, list the App in the network’s directory, describe the App, submit the app to Yammer, and list it in every network’s directory. The last 3 steps are only required for global Apps.

To export Yammer data, there is a Data Export API which generates a zipped file containing the data.

Thursday, 15 November 2012 04:11:00 (GMT Standard Time, UTC+00:00)
SP2013 | SPC12
# Wednesday, 14 November 2012

Speaker : Spencer Harbar

Almost all the features of SharePoint have to deal with identity management and user profiles. Identity management is only 10% about technology. One of the primary considerations in identity management is who owns the data; the other is the quality of the data: is it clean and up to date ? Active Directory data quality is an important example. Sometimes data is also stored in legacy or LOB systems. Access to identity management data has to be controlled, and for external systems the question of authorization and authentication comes into play.

It is really important to work closely with the directory service admins, as they are at the center of such a project; communication is therefore key. Also, several permissions are needed for the synchronization.

An issue so far has been misunderstanding of the UPA architecture; its features and design constraints drive the deployment options. 4 key areas need care : security, privacy, policy and operations. Several services are in the scope of the UPA : SQL, Distributed Cache, Search, Managed Metadata and Business Data Connectivity.

The goals of the new Profile Sync in SP2013 are performance improvements and wider compatibility. As an example, a directory with more than 100’000 users or groups can be imported in 60 hours instead of the 2 weeks it took previously.

Several synchronization “modes” : AD import, UP Sync and custom code synchronization.

AD Import can filter on users and groups (object selection) using LDAP queries (inclusion-based, whereas UPS has exclusion-based filters). It requires one connection per domain. It supports shadow accounts, and property mapping as well as account mapping between AD and FBA or other providers is possible. Replication of AD changes is still needed, but it improves the import. There is no cross-forest Contact resolution, and mapping to SP system properties is not supported. Enriching profiles with data from BDC is not possible, nor is mapping multi-valued properties. When the AD configuration (schema) changes, a full import is required, as well as a purge after the import. The full import can’t be scheduled. AD Import connections are stored in the Profile DB, whereas the UPS stores them in the Sync DB. Mappings and filters are not moved.
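
As an illustration, creating an AD Import connection with PowerShell could look like the sketch below. The cmdlet and parameter names are given from memory and should be verified with Get-Help Add-SPProfileSyncConnection; the account, domain and OU values are hypothetical :

# Sketch : create an AD Import connection on the UPA.
# Verify the parameter names with Get-Help ; all values are examples.
$upa = Get-SPServiceApplication | Where-Object { $_.TypeName -like "*User Profile*" }
$password = ConvertTo-SecureString "P@ssw0rd" -AsPlainText -Force
Add-SPProfileSyncConnection -ProfileServiceApplication $upa `
    -ConnectionForestName "contoso.com" `
    -ConnectionDomain "CONTOSO" `
    -ConnectionUserName "sp_sync" `
    -ConnectionPassword $password `
    -ConnectionSynchronizationOU "OU=Employees,DC=contoso,DC=com"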

Provisioning the UPA and UPS is done in Manage Service Applications or with PowerShell, but with PowerShell there is still the default schema issue. Two workarounds : log on to the machine using the Farm account, or manually change the data in the database (not supported).
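
As a sketch, provisioning with PowerShell could look like the following lines (the application pool and database names are hypothetical examples; run this logged on as the Farm account to avoid the default schema issue mentioned above) :

# Sketch : provision a User Profile Service Application and its proxy.
# The pool and database names are hypothetical examples.
$pool = Get-SPServiceApplicationPool "SharePoint Web Services Default"
$upa = New-SPProfileServiceApplication -Name "User Profile Service Application" `
    -ApplicationPool $pool `
    -ProfileDBName "UPA_Profile" `
    -SocialDBName "UPA_Social" `
    -ProfileSyncDBName "UPA_Sync"
New-SPProfileServiceApplicationProxy -Name "User Profile Service Application Proxy" `
    -ServiceApplication $upa -DefaultProxyGroup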

Some profile properties are automatically put in the taxonomy when provisioning the Managed Metadata Service. Indeed, the MMS is leveraged by the User Profile import. In order to start the User Profile Service Application, the Farm account has to be put in the Local Admins group. A warning complaining that the Farm account is in the admin group will therefore be displayed in the SP Health Analyzer. The recommendation is to enable NetBIOS, right after the UPA provisioning, if the FQDN and NetBIOS domain names don’t match.
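
Enabling NetBIOS domain names can be done on the UPA object itself; a minimal sketch, assuming the NetBIOSDomainNamesEnabled property is the relevant switch :

# Sketch : enable NetBIOS domain names on the UPA, to be run right
# after provisioning and before creating the sync connection.
$upa = Get-SPServiceApplication | Where-Object { $_.TypeName -like "*User Profile*" }
$upa.NetBIOSDomainNamesEnabled = $true
$upa.Update()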

Planning is the key to success. Remember that if the data is rubbish, it will not get better once imported. The health of the AD is very important.

The web front-end servers are still making direct TDS calls to the SQL Server.

Wednesday, 14 November 2012 21:48:00 (GMT Standard Time, UTC+00:00)  #    Comments [0] -
SP2013 | SPC12
# Thursday, 25 October 2012

<Caution>This post is based on the SharePoint 2013 Consumer Preview. Thus, behavior described here may change in the next release of the platform</Caution>

When a site collection is created and if the “SharePoint Server Publishing Infrastructure” site collection feature is activated, SharePoint 2013 automatically creates a term group in the term store attached to the web application. This term group can then be used for the navigation. The structure below is therefore created :

[Screenshot : the term group created for the new site collection]

But when you delete the site collection, the term group and the structure underneath it remain, which can in a sense be understood. The drawback is that once the site collection is deleted, you can no longer see this term group in the Term Store Management Tool from the Central Admin site. It also means that if you create a site collection with the same name as the previous one, the new term group will be suffixed with a number, like “-1” or “-2”. This can be a bit dirty.

Two possibilities to avoid this situation :

  1. Delete the Term Sets and Term Group from the site collection before its deletion
  2. Use PowerShell to delete these ghost Term Groups if it is too late.

For the second solution, the following script can be used :

# Set these two values first : $url is the URL of the target site
# collection, $termGroupName the name of the Term Group to delete.
$url = "http://intranet.contoso.com"   # hypothetical example
$termGroupName = "newsitecollection"   # hypothetical example

$Site = Get-SPSite $url
$ServiceName = "Managed Metadata Service"
$session = Get-SPTaxonomySession -Site $Site
$termStore = $session.TermStores[$ServiceName]

"About to delete Term Group in"
"URL : " + $url

if ($termStore -ne $null)
{
    $termGroup = $termStore.Groups[$termGroupName]

    if ($termGroup -ne $null)
    {
        "Group : " + $termGroup.Name

        # A Term Group can only be deleted once it is empty, so delete
        # its Term Sets first. @(...) takes a snapshot of the collection
        # so it can be safely modified while being enumerated.
        @($termGroup.TermSets) | ForEach {
            "deleting " + $_.Name
            $_.Delete()
            $termStore.CommitAll()
            $_.Name + " deleted"
        }

        $termGroup.Delete()
        $termStore.CommitAll()
    }
}

$url is the URL of the target site collection, and $termGroupName is the name of the Term Group you want to delete.

If you want to check which Term Groups exist in your store, you can use this script :

# Set $url to the URL of the target site collection.
$url = "http://intranet.contoso.com"   # hypothetical example

$Site = Get-SPSite $url
$ServiceName = "Managed Metadata Service"
$session = Get-SPTaxonomySession -Site $Site
$termStore = $session.TermStores[$ServiceName]

"Groups in"
"URL : " + $url

if ($termStore -ne $null)
{
    # Print the name of every Term Group in the store.
    $termStore.Groups | ForEach {
        $_.Name
    }
}

Thursday, 25 October 2012 01:18:00 (GMT Daylight Time, UTC+01:00)  #    Comments [0] -
SP2013
# Friday, 19 October 2012

<Caution>This post is based on the SharePoint 2013 Consumer Preview. Thus, behavior described here may change in the next release of the platform</Caution>

SharePoint 2013 comes with a very nice feature, the Managed Metadata Navigation, allowing us to define the navigation completely separately from the content or the physical pages. But it has to be used with some care.

I found what could be interpreted as a bug in the SharePoint 2013 Consumer Preview and its new Managed Metadata Navigation. When a term has the same name as a subsite, navigating to the page targeted by the term itself is fine, but for all the sub-terms, you get a “Page not found” message. The pictures below show that navigating to the “About” term correctly goes to the default page of the “About” subsite (i.e. /about/Pages/default.aspx), but when selecting an “About” sub-term, the /about/companyinformation/Pages/default.aspx page is not displayed :

[Screenshot : navigating to the “About” term displays the page correctly]

[Screenshot : selecting an “About” sub-term displays “Page not found”]

The structure of the content is the following :

[Screenshot : the site structure]

And the metadata structure :

[Screenshot : the metadata structure]

Even if, for the “About” term, I define a custom target page and explicitly specify /about/Pages/default.aspx, I still get the “Page not found” error.

The only way to solve this problem is to change the automatically assigned Friendly URL AND also to change the Target Page.

In fact, this is because there is a naming collision : /about, the Friendly URL associated with the “About” term, is also the URL of the “About” sub-site. This is why the “About” term itself works.

It also works for the “About” / “Locations” sub-term, because the associated Friendly URL is /about/locations, which is also the URL of a sub-site, leading to its default page. Unfortunately, the “Company Information” term is associated with the /about/company-information Friendly URL, which does not correspond to any sub-element of the /about site; the /about site seems to take precedence over the Friendly URL resolution and its Target Page. That explains why, even if you specify a valid Target Page for the “Company Information” term, “Page not found” is still displayed.

So, as written above, changing the Friendly URL automatically associated with the “About” term to “aboutus” would only partially solve the issue. Indeed, /aboutus would remove the collision with the sub-site name, but would lead to a non-existing page. Also changing the Target Page would then completely work around this.

In conclusion, be careful with the names you give to your sites and the Friendly URLs associated with your terms.

Friday, 19 October 2012 20:28:00 (GMT Daylight Time, UTC+01:00)  #    Comments [0] -
SP2013
# Wednesday, 01 August 2012

<Caution>This post is based on the SharePoint 2013 Consumer Preview. Thus, behavior described here may change in the next release of the platform</Caution>

[Screenshot : the “unable to save the form” error message]

While working on a SharePoint 2013 farm, after several hours of uptime, the user interface showed me an error message when I wanted to save an item in a list : “The server was unable to save the form at this time. Please try again.” Looking at different possible causes, I found that the available memory was drastically low. Indeed, on this 8GB RAM front-end server virtual machine (the database is hosted on a second VM), only a few megabytes were still available.

[Screenshot : noderunner.exe memory consumption in Task Manager]

Then, in the “Processes” tab, I saw that the noderunner.exe process was eating a lot of memory. A quick tour on Google and I found Marc Molenaar’s blog post about the noderunner.exe process.

I decided to give it a try and, as suggested in the post, I restarted the “SharePoint Search Host Controller” service. Same observation as Marc’s : the service took a long time to restart, and a huge part of the memory was released. The good thing is that, at the same time, it solved my item-saving issue : the error disappeared.
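
For reference, the restart can also be done from an elevated PowerShell prompt. A small sketch, assuming “SPSearchHostController” is the internal name of the “SharePoint Search Host Controller” Windows service :

# Sketch : restart the SharePoint Search Host Controller to release
# the memory held by the noderunner.exe processes.
# "SPSearchHostController" is assumed to be the internal service name.
Restart-Service -Name "SPSearchHostController" -Force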

To be sure this service restart was what “solved” the issue, I worked again for several hours, also playing with the search, and when the VM ran short of memory, the same error message was shown again.

Another side effect of this low-memory situation occurs when browsing the Managed Metadata tree. I suddenly and constantly received an “Unexpected response from server. The status code of response is ‘500’. The status text of response is ‘System.ServiceModel.ServiceActivationException’” message. Unfortunately, it was impossible to get out of this message loop, and the only way to get rid of it was to kill the Internet Explorer application.

[Screenshot : the status code ‘500’ error message]

Wednesday, 01 August 2012 22:01:23 (GMT Daylight Time, UTC+01:00)  #    Comments [0] -
SP2013
# Wednesday, 18 July 2012

For a couple of weeks, the Microsoft world has been boiling, and the recent announcements raised the level of excitement among partners, developers and users. Among the most recent events, it started with Yammer’s acquisition by Microsoft, followed by the Windows Phone 8 announcement. Then came Windows 8 and the related devices, accompanied by the Surface tablet. And now, what many people had been expecting for a while : the Office 2013 wave, including SharePoint 2013. This is not only a wave like the “Office 14 wave” we had; it is a tidal wave, should we say, with the release of the “Consumer Preview” of the products, introducing the “Modern Office” concept.

During the July 16th presentation, SharePoint 2013 was only briefly mentioned, but the complete Office 2013 Consumer Preview set was released, and following the #officepreview tweets was amazing. At the same time, the NDA by which the closest communities (like the MVPs) were bound was lifted, and a massive amount of information was released.

The install

So, during the event, I downloaded SharePoint Server 2013 (2.1 GB) to start an install in a VM with 4 CPUs and 8GB of RAM. The first surprise came when I started the setup program, which directly offered an “Install Prerequisites” option, saving us from downloading the prerequisites individually (if I remember well, the first versions of SharePoint 2010 didn’t have such a shortcut). And fortunately so, because the list of prerequisites is quite big. Once the prerequisites were installed, the setup itself took around 20 minutes to install the beast. The configuration wizard is well known too, as it looks like (if not the same as) the one from 2010. Finally, the post-install wizard starts and displays the first bits of the new SharePoint 2013 user interface.

[Screenshots : the default home page and the home page with a new theme]

Quick Round

Once the install is done and the first site created, the new Metro-style user interface is presented. To be honest, the default theme is not the most successful one; it is really difficult to see what is part of the header, what is part of the current navigation, and what is content. So, the first thing I did was to go to the former “Site Actions” menu, which is now on the right of the top bar, then to “Site Settings” and “Change the look”. Some themes are better than others at distinguishing between the content and the navigation. In addition, for each theme (or look?), it is possible to select a different color scheme, and to change or remove the background image or the fonts used.

By default, the ribbon is hidden; to make it appear, you have to click on one of the menu headers. The ribbon then appears, “sliding” from the top of the page header. The current navigation didn’t change much, but a nice feature is the ability to modify the links of both the global navigation and the current navigation using the “Edit Links” link, which is pretty convenient as it does not force you to go through the “Top Link Bar” or “Quick Launch” settings.

The user menu is quite simple now. Gone are “Sign in as a Different User” and other items; in SharePoint 2013, there are only “About Me”, leading you to your personal page and the social part of SharePoint, and “Sign Out”. In the same area, the “Share” button allows you to invite others and assign them permissions on the current page, “Follow” to have the current page appear in your feeds, “Sync” to synchronize the content of your site locally, “Edit” as a shortcut to the Page => Edit action, and the surprising “Focus on Content” button. This last feature toggles between a view without any navigation, with only the content area on the screen, and the standard view of the SharePoint page. Why not…

[Screenshot : viewing the properties of a file]

But how do you create a document library? If you are not familiar with it (meaning the first 15 minutes), you will desperately look for a “Create” button somewhere. Instead, going to “Site Contents” enables you to “add an app”, which proposes the different types of lists and libraries you can create. Thus, I created a first library and uploaded a file into it, which does not differ from the previous version of SharePoint. What surprised me again is the usability of some features. For example, viewing the properties of a file, which before was quick and needed only one click, requires two clicks in SharePoint 2013, each time on the “…” button. I would expect such functionality to be directly in the context menu of the item. Let’s see if it stays like this in the final release, but maybe it is worth some improvements in some cases.

The performance

[Screenshot : memory usage of the SharePoint 2013 VM]

With SharePoint 2010, installing on a VM with 4 CPUs and 8GB of RAM was quite OK for trying things out. Having SQL Server in the same VM was not that bad; sometimes slow, but not that bad. With SharePoint Server 2013, I decided to install on the same kind of machine, and after a while, it became really slow. I connected to the server to check the performance and, even though the CPUs were only used at a few percent, it was radically different with the memory. Put simply, less than 1GB was “free”, and the main memory eaters were SQL Server and the Distributed Cache Service (AppFabric). This demonstrates another level of requirements. Indeed, checking on the web, I found an article by Bjorn Furuknap and, later, the hardware requirements from Microsoft : 24GB of RAM (yes, 24!!) is recommended. This new release of SharePoint has a price… It is also true that during the install and configuration I selected all the services, which certainly plays a role. But Visual Studio is not yet installed, and I am wondering what kind of setup a developer will need for a decent development environment.

This concludes my very first post on SharePoint 2013; other articles will follow on a regular basis, describing either the (new) features of the platform or what is new in terms of architecture and development. So, like SharePoint 2013, I am “working on it”, and thanks for staying tuned.

Wednesday, 18 July 2012 22:37:07 (GMT Daylight Time, UTC+01:00)  #    Comments [0] -
SP2013