Title : SharePoint 2013 – Planet of the Apps 2.0
Author : Sahil Malik
SharePoint 2013 comes with a new development model based on Apps. This book goes through the different kinds of Apps, giving examples of each of them, explaining what Apps are and building each example on top of the previous one. It is an introductory book and is not intended to be an in-depth one going into every detail of App development, which is understandable given how vast the topic is.
Book Review :
The very good thing is that the book is written in such a way that you read it fast. It is not a 600-page doorstop, and to give an overview of SharePoint 2013 Apps, it is perfect. It starts with a really simple App, a SharePoint-hosted one, then adds complexity step by step and ends with the Server-to-Server type of App, covering permissions, Azure ACS and many other aspects that a developer starting out with App development should know. That said, as some subjects are complex, some parts of the book should be read carefully and some time should be spent to really understand certain notions before moving on to the next example or chapter. Additionally, the writing style is nice, and Sahil uses good humor to help digest some topics.
For me, this is the book to start with (well, I haven't read many App development books so far; more are coming…), giving you the first steps to develop SharePoint 2013 Apps. It is short enough to be fun and long enough to give a nice understanding. And remember, “Hash is legal in Amsterdam (almost)” (a reference to the first version of the book).
After setting up a new SharePoint 2013 environment, I started testing it by creating a really simple SharePoint-hosted App, a basic “Hello World”. In this environment, my Visual Studio 2012 development machine is separate from the SharePoint 2013 box. To test this very simplistic application, I just pressed F5 to launch the VS debugger, landed on the SharePoint 2013 page, and was able to see my App in the quick launch menu. But when I clicked on the link, I got a nice “The resource cannot be found” (404), as shown in the picture at the beginning of this post.
I checked the SharePoint 2013 App settings several times, such as the “App domain” URL and the “App prefix”, and they were correct. I also checked the DNS settings and the bindings of the IIS site, and everything was fine.
During my troubleshooting, I saw that deploying the App manually worked perfectly. This means the difference was either in the deployment (Visual Studio versus manual) or in the execution of the App.
Googling a bit, I found this post on the Microsoft forums : http://social.msdn.microsoft.com/Forums/en-US/appsforsharepoint/thread/188d78d8-8c35-46df-8770-695d1258ad18/
In this long thread, people mention that the workaround is to add an extra colon to the loopback IPv6 address that VS writes to the hosts file (located in %Windir%\System32\drivers\etc ), deliberately making the ::1 entry invalid. This indeed worked for me, but raised another question. VS was adding two IP addresses for the same host :
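The two entries looked roughly like this (the App host name below is illustrative; the addresses are the ones I observed) :

```
# %Windir%\System32\drivers\etc\hosts — entries added by Visual Studio
::1              app-12345abcdef.apps.mydomain.local
10.180.128.195   app-12345abcdef.apps.mydomain.local
```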
Clearly, ::1 is the IPv6 equivalent of 127.0.0.1, but my App was not running locally; it was running on the 10.180.128.195 server. So why was the IPv6 entry wrong, rather than equal to my SharePoint 2013 server's IPv6 address ?
While in debug mode, I replaced the ::1 address with the real IPv6 address of my SharePoint 2013 server. And… it worked like a charm.
So far, coming from the many different tests I did, my theories are (and be cautious, because they still need to be confirmed) :
- Adding an extra colon to the loopback IPv6 address makes it invalid (RFC 5952). This causes my development machine to fall back to IPv4 to connect to the server.
- The reason why VS adds the loopback IPv6 address instead of the correct one is likely that it cannot resolve the host name over IPv6. And rather than not adding any entry, it adds ::1.
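The “invalid address” part of the first theory is easy to verify with any IPv6 parser; here is a quick check in Python (just an illustration of validity, not part of the original troubleshooting) :

```python
import ipaddress

def is_valid_ipv6(addr):
    """Return True if addr parses as a valid IPv6 address."""
    try:
        ipaddress.IPv6Address(addr)
        return True
    except ValueError:
        return False

print(is_valid_ipv6("::1"))    # True  — the loopback entry VS writes
print(is_valid_ipv6(":::1"))   # False — the same entry with the extra colon
```

With the extra colon the entry no longer parses, so the hosts lookup falls through to the IPv4 line.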
As also written in the MSDN forum, to avoid having to manually change the hosts entries for every debugging session, disabling IPv6 is a good workaround, and most probably not an issue for most people.
Last November, during the SharePoint Conference 2012, I bought the Surface RT and so far, I’m really happy with it (and don’t want to change for a Pro version).
But yesterday, when I took it out to read my favorite blogs, I saw that there was no WiFi connection. I was still able to see the dozens of non-hidden WiFi networks of my neighbors, but none of the 3 SSIDs that I use (2 of them are visible, but protected). Switching flight mode on and off didn't help, so I tried the option to refresh the Surface, and after about 30 minutes, still nothing.
The day after, I tried to connect it to another WiFi network and there was no issue. I took this opportunity to download and install all the available updates, in case one of them could fix a potential bug (knowing that many people have WiFi connectivity issues with the Surface). Back at home, no luck. Still not able to connect to my hidden SSID or the two others.
Trying my search engine again, I found this superuser.com post about an issue connecting a Windows 7 computer to a WiFi router. One of the answers was to change the WiFi channel used by the router. Then I refined my searches and found an interesting thread on the XDA developers forum. There, it says that US Surface RT devices can't connect to channels 12 & 13.
Looking at my router's WiFi settings, I saw that it was in “Automatic Mode”, and setting it to a specific channel other than 12 and 13 made my Surface find the network again. Honestly, I didn't check whether it is confirmed that US Surface RT devices really can't connect to channels 12 and 13, but at least it works for me, and maybe for others as well.
So, if, like me, you bought your Surface RT in the US and you lose your wireless network, check your router's WiFi settings and try changing the channel used…
After two weeks of vacation in and around Las Vegas following the SharePoint Conference (@SPConf, #SPC12), I had time to think about what I found good and less good at this event. Even though taking that much time before writing a wrap-up was not necessary, stepping back from the conference helped make things more objective. So, in this post I will review the different aspects of the event, from the location through to the content.
When it was announced in Los Angeles last year that the 2012 SharePoint Conference would take place in Las Vegas, I immediately thought about registering, mainly for two reasons. First, I really love Las Vegas, and second, it meant that something great would happen. Even if most of the attendees knew that a new release would come, not many people thought that SharePoint 2013 would come out in such a short period of time. Since SPC09, I was convinced that one of the reasons for going back to Vegas was a new version of the products (a bit of intuition as well). The Mandalay Bay is a great place for such a huge event, even if I'm not sure it is the only Convention Center able to host that many session rooms and 10'000 people. Moreover, the number of places where parties or dinners can be held is unbelievable. Taking a room in the same location (either the Mandalay Bay or TheHotel) is also a good decision : going from the room to the Convention Center already takes 15 minutes, and staying in another hotel would take considerably longer (even from the Luxor, count 10 more minutes). So, for people wondering whether it is worth staying in the same hotel as the conference, the answer is : YES.
In 2011, it was the first time I really used Twitter, and I was blown away. For a long time I had wondered about the purpose of such a tool : is it really useful ? A conference is a really good example of its usage. The medium was perfect for propagating information, room changes and other important news. Moreover, communication to and from the conference organizer (@SPConf) was radically easier, and it encourages people to interact with others and exchange experiences or news. On the real-life side, this year was amazing. I saw and was involved in many discussions with people I had never met before, or people I had only exchanged with on Twitter or via e-mail. One could see people really talking and making new friends. The Community Hub, set up by Joel Oleson (@joeloleson) and Mark Miller (@EUSP), was a great success. A lot of people passed by the booth to meet, and it was very dynamic, showing that the SharePoint community is not only a word, but is also very active. Here, my advice to people new to a (SharePoint) conference would be : don't be shy, engage with others and take a good bunch of business cards.
With the partner events and the Microsoft Tuesday event with Bon Jovi, there were many occasions to party all week long. The Passport party, red party and green party added to the lot of fun people were able to get. They were great events. Nevertheless, the week goes fast, and being able to meet each of your friends is definitely hard. It goes too fast. And don't go to sleep too late : you will pay for it during the following days of the conference. A special mention also to Erica Toelle (@ericatoelle) and her #SPCSuite idea, which was a lot of fun.
Once again, the organization was great. Even with 10'000 attendees, registration went smoothly, with almost no waiting time. During the course of the week, I didn't see any big problems. If I had to name an area to improve, it would probably be session room allocation. Several times, sessions were packed and it was no longer possible to enter the room. One idea to avoid these situations in the future would be to enforce session registration. On the other side, I was really surprised to see the lack of tolerance of some people. I once saw a blue-shirted lady explaining hundreds of times that the session was full and that no more people could enter. Suddenly, one or two guys insulted her so much that she had to leave, shocked. I don't understand this kind of behavior from an attendee; it is like shooting the messenger. I would also like to thank all of these blue-shirted people for their guidance and kindness during the week.
Another point to improve is the break between sessions, which was not long enough, at least from my point of view. 15 minutes to go from the lower level up to the 3rd level, with so many people in the corridors, is too short. I would be open to sessions starting earlier in the morning and finishing later in the evening to make the breaks longer, as I know many people do not attend the same kind of session (business, developer or IT) the whole day, but rather switch from one to another. Splitting the types of sessions by floor was very good though.
Organizing catering for 10'000 people is really serious and can't be improvised. Here as well, thumbs up for the organization. I never waited more than a few minutes to reach the buffet and then to find a seat. Again, we should not be afraid of joining a table that already has many people around it; it is a very good way to make new friends or to network a bit. For non-American people, the food can be… different, but it was OK.
Seeing my colleague come back from the Build Windows conference with a Surface and a Lumia 920 made me mad. OK, not that mad, but still. Of course, we could not expect Microsoft to give away 10'000 Surfaces to the attendees. Instead, we got a backpack and a bottle. I can't count the number of bottles I have gotten from conferences. On the first day of the conference, I immediately thought that a nice piece of swag would have been a 3G SIM card for smartphones. At least people would have been able to tweet or access the internet (see my last paragraph).
From the keynote on, it was obvious that 3 main topics would be addressed during the week : Social Networking, Apps and the Cloud. And we were not disappointed. Almost. In reality, when discussing with colleagues and friends, it appeared that several sessions were similar. Moreover, there was a lack of in-depth sessions, most likely because of the recent release of the platform. I didn't attend any sessions that were really bad, but after watching a few videos and gathering some feedback, it appeared that many presentations were not prepared or rehearsed enough to avoid bugs or issues on stage. Another thing : when promoting the “all-in-the-Cloud” strategy, it works better when there is a network, but I will come back to this specific point later. Indeed, many demos failed because of the connectivity. I was lucky to mostly attend great sessions held by top speakers. When you attend a session with Andrew Connell or Eric Shupps (@eshupps), it is a guarantee that you will have a good time. But according to some other attendees, not everyone was that lucky. It is also true that on some occasions I attended sessions where the title was not really aligned with the content. The strategy I adopted was to select sessions mainly according to the presenter. My favorite presentation, because of its originality and because it was really spectacular, was without doubt “Zero to Live in 60 minutes using SharePoint 2013 Publishing”, with Andrew Connell, Daniel Kogan and 4 other Microsoft Program Managers.
WiFi (because it deserves its own section)
Finally, and even if it was “heard loud and clear” by the organizer, the WiFi connectivity was not just bad, it was awful. I understand that providing WiFi for 10'000 people is not easy, but there was already a warning in Los Angeles, where attendees complained that the WiFi was not reliable. Also, nowadays, both the technology and the people running it are able to support such a big number of connections. In the end, 2 days without connectivity is simply not acceptable. Sorry. Moreover, on the last day of the conference, connectivity was also lost for the exhibitors. Again, when promoting the Cloud, that is a bit of an issue.
Last but not least, I would like to thank Dave Coleman (@davecoleman146) for offering his blogging platform to people like me; I was happy to meet Dave and chat a bit during the conference. I hope to see the people I met or missed again in the course of next year, or at the next SharePoint Conference (not yet announced).
Definitely, the SharePoint Conference is THE conference to attend when working with SharePoint.
Speaker : Mirjam Van Olst
With SP2007, creating a web template required creating a site definition deployed in the 12 hive. Changes to the template required a change to the site definition.
Since SP2010, web templates can be changed afterwards, even if sites were already created based on that template. Templates are also saved as .wsp files.
A web template can be Site- or Farm-scoped and uses the WebTemplate feature element. Only the onet.xml and elements.xml files are required. The terms Site Template and Web Template can now be used interchangeably; they appear the same to the user, with no difference. “Save as a site template” creates a sandboxed solution, stored in the site collection gallery, which can be imported into Visual Studio. But the import is difficult, and it is probably better to create a new site definition. Web Templates are based on a site definition, but they do not inherit from their base site definition. Saving a publishing site as a template is not supported.
Some Web Template limitations : feature stapling can't be used, and neither can variations (which would be the only reason to go for a custom site definition).
Web Template provisioning steps : first, the URL for the site is created. Second, the GLOBAL onet.xml file is provisioned. For a site collection scoped Web Template, the site collection features are activated in the order they are declared in the onet.xml; for a sub-site scoped Web Template, a check that the site collection features are activated has to be done. Then, site-scoped features are activated in the order they are defined in the onet.xml. Finally, list instances are created. Since the lists do not yet exist while the features are activating, a feature that needs to modify a list can't do it this way; such list creation should instead be done during feature activation (in an event receiver).
A Web Template requires some properties : BaseTemplateName, BaseTemplateID and BaseConfigurationID. When starting a Web Template, it is recommended to take a copy of the out-of-the-box onet.xml and strip it down rather than starting from scratch. A Web Template onet.xml can only contain one configuration, and its Configuration ID has to be 0. The Modules element is not supported in the onet.xml.
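As a sketch, with purely illustrative names, a minimal WebTemplate elements.xml based on the Team Site definition (STS, configuration 0) might look like this :

```xml
<Elements xmlns="http://schemas.microsoft.com/sharepoint/">
  <!-- BaseTemplateName/ID/ConfigurationID point to the base site definition -->
  <WebTemplate
      Name="ContosoTeamSite"
      Title="Contoso Team Site"
      BaseTemplateName="STS"
      BaseTemplateID="1"
      BaseConfigurationID="0" />
</Elements>
```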
Recommendations : use only as many features as needed and limit the number of features that need to be activated (activation slows down site provisioning). Be careful : site-scoped features can block sub-site creation.
There are two ways to deploy a Web Template : as a Farm solution or as a sandboxed solution. The Farm solution way makes the Web Template available in the whole environment. With a sandboxed solution, the onet.xml and elements.xml files are stored in the content database; a sandboxed solution can also be deployed to Office 365. In most cases, everything done in a Web Template can be put in a sandboxed solution. But make sure that the solution can be removed after a site has been created.
A Web Template can be used from code to provision new sites, using the web template feature GUID and the template name, separated by the sharp sign. It is also a good idea to store the web template name and version in the property bag (<PropertyBag> element) of the site.
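The identifier format can be illustrated with a small helper (the GUID and template name below are made up; only the “feature GUID, sharp sign, template name” shape comes from the session) :

```python
def web_template_id(feature_guid, template_name):
    """Compose the "{GUID}#Name" identifier used to provision a site from a web template."""
    return f"{{{feature_guid}}}#{template_name}"

def parse_web_template_id(identifier):
    """Split the identifier back into (feature GUID, template name)."""
    guid, _, name = identifier.partition("#")
    return guid.strip("{}"), name

tid = web_template_id("612d671e-f53e-4b5a-a533-7e2b9c1d0a00", "ContosoTeamSite")
print(tid)  # {612d671e-f53e-4b5a-a533-7e2b9c1d0a00}#ContosoTeamSite
```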
A webtemp file can be linked to several site definitions.
Apps for SharePoint must be self-contained.
The domain of the App Web is different from the one the user is browsing (the Host Web). The App Web can be created starting from the APP#0 site definition. It is also possible to create the App Web using a custom Web Template, deployed in the App itself in a web-scoped feature and referenced in the appmanifest.xml file.
Speakers : Oleg Kofman, Jon Epstein
The goal of SharePoint governance is to keep both IT and users happy and to set some processes in place. It should involve a broad range of people, from the business (really important for adoption) to the network people. Legal and compliance teams also become even more important, as data and files move online and licensing concerns arise.
SLAs should be published in order to limit the number of escalations and to help set expectations.
So far, there are 3 models : Farm solutions (since 2007), sandboxed solutions (deprecated) and SP Apps. Sandboxed solutions, even if deprecated, are not yet gone; the recommendation is to convert them. Apps are the preferred option for multi-tenant Office 365. They are easy to deploy, maintain and reuse, but there is no server-side code. Even so, an App can be an umbrella on top of another solution that does have server-side code.
Process to determine if an App can be used : first, check if there is already something in the Enterprise Catalog and if it can be done without code, to avoid reinventing the wheel. Then, check the SP Store or 3rd-party vendors for a build-vs-buy decision. Then, check whether a timer job or any other server-side code needs to be developed. Will it save time, and who will maintain the solution (once the developers are gone) ? The last step is to define who will publish the App in the store.
There are different hosting models. SharePoint-hosted : when a user deploys the App, a subweb is created; so if 10000 users install the App, there are potentially as many subwebs created ! No server-side code is allowed. Provider-hosted : the App is hosted on your own infrastructure, which can be completely separate, enabling the use of other languages, such as PHP, for App development. Autohosted : a Windows Azure Web Site and a SQL Azure DB are automatically provisioned when the App is installed.
So far, every developer needed his own SP farm. With the new App model, this is no longer really required, as the developer can stay in a single site.
Publishing an App requires a choice between two App scopes : Web scope (one site per instance) or Tenant scope (one site per tenant). This can't be defined by the developer (there is no manifest entry). It is important to publish the evaluation criteria for App permissions, so that developers know what is expected and what is allowed. High Trust Apps (for On-Prem) require more scrutiny. New App versions may also request different permissions, so it is important to check, from a governance perspective, which permissions are requested and to challenge them. Plan SLAs for publishing requests : someone has to proactively look for the requests and approve them. Plan who will highlight, add and review (technical) Apps. It is possible to have one App catalog per Web Application, but it is not possible to share an App catalog across Web Applications. Therefore, define whether Apps should be published in all the catalogs or not, and put such a process in place.
On the operations side, it is important to monitor usage, errors and licensing. This can be done from Central Admin, and Apps monitoring information is stored in the Usage database.
Speaker : Sesha Mani
SharePoint 2013 Server-to-Server (S2S) scenarios implemented out of the box : SP to Exchange, SP to SP and SP to MTW (Multi-tenant workflow service). OAuth is a standard that enables an application to access a user's resources without prompting for the user's credentials. S2S is a kind of extension of OAuth 2.0 that allows high-trust calls between applications. An Application Principal is like a user principal (user identity), but for applications.
S2S implementation in O365
All S2S applications must exist in MSO-DS (a kind of application directory). ACS plays the role of trust broker. When someone connects to SP Online, SP Online makes a call to the ACS to authenticate itself (with a certificate exchange), asks to talk to Exchange Online and tries to get a token. ACS validates the SP Online token and checks the requested endpoint before issuing an STS token. SP Online augments the token with the user identity information before sending it (the token is composed of the inner token, the basic ACS token, and the outer token, containing the added user claims). Exchange Online validates the STS token, ensuring it has been issued by the ACS. It also validates that the inner-token endpoint is the same as the outer-token endpoint. Then it ensures that the user has the necessary permissions. Basically, the Application Identity is in the inner token, while the User Identity is in the outer token. Finally, Exchange Online returns the resources to SP Online.
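The inner/outer token checks above can be sketched conceptually. Note that real S2S tokens are signed JWTs; the dict structure and field names below are invented purely to illustrate the validation logic described in the session :

```python
TRUSTED_ISSUER = "acs"  # the trust broker (Azure ACS in this scenario)

def validate_s2s_token(token):
    """Conceptual validation of an S2S token on the receiving service."""
    inner = token["inner"]  # application identity, issued by ACS
    outer = token["outer"]  # user identity claims added by the calling service
    if inner["issuer"] != TRUSTED_ISSUER:
        return False        # token must have been issued by the trust broker
    if inner["endpoint"] != outer["endpoint"]:
        return False        # inner- and outer-token endpoints must match
    return True             # user permission checks would follow

good = {"inner": {"issuer": "acs", "endpoint": "exchange-online"},
        "outer": {"endpoint": "exchange-online", "smtp": "user@contoso.com"}}
bad  = {"inner": {"issuer": "acs", "endpoint": "exchange-online"},
        "outer": {"endpoint": "sharepoint-online", "smtp": "user@contoso.com"}}
print(validate_s2s_token(good))  # True
print(validate_s2s_token(bad))   # False
```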
This scenario is also valid for on-premises.
S2S authentication - On-Premise Only
SP hosts the App Management Service, the STS and the User Profile Application (UPA) Service. Exchange hosts an STS as well. Establishing a trust between the two STSes can simply be done using PowerShell commands (New-SPTrustedSecurityTokenIssuer and New-PartnerApplication). A user connects to SP and wants to do some activity on both SP and Exchange. The SP STS issues an STS token containing the outer and inner tokens. That STS token is sent to Exchange, which checks whether it accepts delegation for this token. An endpoint check is also done between the inner- and outer-token information. Exchange checks the user's permissions before returning the resources to SP. The configuration steps are : STS trust establishment (using the PS cmdlets), permissions for the principal, and scenario-specific settings.
In the Cloud, MSO-DS synchronizes with the ACS, and SP Online trusts the ACS, which plays the role of trust broker. On-Prem, the setup is the same as in the previous scenario. In addition, the SP STS synchronizes with the SP Online STS, and AD synchronizes with MSO-DS. A user makes a query to SP Online through On-Prem SP. SP issues an STS token to connect to SP Online. The request is sent to ACS for validation. Then SP sends the augmented token to the SP Online STS (containing the e-mail address – SMTP – and the UPN). SP Online accepts the token and returns the resources back to On-Prem SP.
S2S not supported topologies
Cross-premises or cross-product S2S calls (SP calling Exchange Online), cross-tenant scenarios (Contoso to Fabrikam), S2S calls from SP without AD to Exchange or Lync On-Prem, and web apps using Windows Classic authentication.
Speaker : Eric Shupps
The SP2013 model still has sites, content and service APIs, but now also has Apps, with a package, HTML/JS (or another technology) and data. For an App to be authorized in SharePoint, it uses OAuth.
The Autohosted App model enables SharePoint to automatically deploy the package to Azure. Be careful of the limitations of this model. In Visual Studio, creating an App for SharePoint actually creates two projects : the App project and the SharePoint project. The App project is the entity that will be deployed. You can only deploy using Visual Studio to a SharePoint site created from the Developer Site template. A TokenHelper.cs file is automatically created in the VS solution to deal with all the token-related operations and authorization; it is used to get the client context.
An App can consume SQL data using WCF/JSON/XML, SharePoint data using OAuth/REST/CSOM or Office data using HTML/XML.
SQL Azure database tables must have a primary key before being deployed. Azure Virtual Machines come in 5 different sizes : XS/S/M/L/XL. They have persistent storage and virtual networking. A Web Role is an Azure VM. It can be shared or reserved, 3rd-party assemblies can be deployed, and TFS/Git/Web Deploy are available. Azure Web Sites are free but only contain the default assemblies (i.e. WIF is not there and can't be deployed); TFS/Git/Web Deploy are also available.
SharePoint Apps only use HTTPS.
Office 365 Apps only use HTTPS and need a unique App ID. In order to “F5” deploy, the site has to use the Developer Site template. Publishing to the Office Store requires App & Package validation. The Office Store is public, whereas the App catalog is private and therefore does not require validation or licensing.
Speakers : Mike Ammerlaan, Neil McCarthy
To integrate Yammer in an App, “Yammer Embed” is the thing to use. It provides support for profile information and also communicates with the Yammer platform. When a conversation is started in your application, the embed will also post it on the Yammer network. Conversations can be at the site level or at the item or document level, which then uses the Open Graph protocol.
Yammer is exposed as a set of protocols and APIs that allow to build any kind of application (example of an ASP.NET application). Documentation can be found at http://developer.yammer.com .
An App needs a key (ClientID) and can be proposed in the company's App Store. Your application has to be registered from the Yammer web application, after which keys and tokens are delivered. Some other information has to be filled in, such as the website and URIs.
The REST APIs support Messages (be careful not to flood users' feeds), Groups, Users, Suggestions, Autocomplete, Search and Networks.
An activity story is composed of an actor, an object and an action (Robert Red has Updated this File). Activity stories appear in the Activity feed (“Recent Activity”). A story contains a URL (i.e. to a document), a Type, an Image, a Title, the Name and e-mail address of the Actor, an Action (i.e. Update; custom actions are also supported), and a Message.
From SharePoint, Remote Event Receivers can be used to publish activities happening in SharePoint to Yammer. But in order for this to work, the OAuth token must be cached so it can be reused when calling the Yammer REST APIs. For example, the Yammer APIs can be called for each update on a document or list item.
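Putting the two previous paragraphs together, an event receiver could build an Open Graph activity payload like the following sketch. The field names mirror the story parts listed above, but the exact Yammer schema and endpoint are assumptions, not verified here :

```python
import json

def make_activity(actor_name, actor_email, action, obj_url, obj_title, message=None):
    """Build an Open Graph-style activity : actor + action + object (+ message)."""
    return {
        "activity": {
            "actor": {"name": actor_name, "email": actor_email},
            "action": action,  # e.g. "update"; custom actions are also supported
            "object": {"url": obj_url, "title": obj_title, "type": "document"},
            "message": message,
        }
    }

# e.g. fired from a remote event receiver when a document is updated
payload = make_activity("Robert Red", "robert.red@contoso.com", "update",
                        "https://contoso.sharepoint.com/docs/spec.docx", "Spec")
print(json.dumps(payload, indent=2))
```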
User-enabled and Admin-enabled Apps need the non-free version of Yammer. Admin-enabled Apps are useful for posting information on behalf of users (impersonation).
Global Yammer App development steps : register the App, develop it, list the App in the network's directory, describe the App, submit it to Yammer, and list it in every network's directory. The last 3 steps are only required for Global Apps.
To export Yammer data, there is a Data Export API which generates a zipped file containing the data.
Speaker : Spencer Harbar
Almost all the features of SharePoint have to deal with identity management and User Profiles. Identity Management is only 10% about technology. One of the primary considerations when talking about Identity Management is who owns the data. The other is the quality of the data : is it clean and up to date ? Another important consideration is, for example, the Active Directory data quality. Sometimes, data is also stored in legacy or LOB systems. Access to Identity Management data has to be controlled, and for external systems, the questions of authorization and authentication come into play.
It is really important to work closely with the DS admins, as they are at the center of such a project. Communication is therefore key. Also, several permissions are needed for the synchronization.
An issue so far has been a misunderstanding of the UPA architecture; its features and design constraints drive the deployment options. There are 4 key areas to be careful with : Security, Privacy, Policy and Operations. Several services are in the scope of the UPA : SQL, Distributed Cache, Search, Managed Metadata and Business Data Connectivity.
The goals of the new Profile Sync in SP2013 are performance improvements and wider compatibility. As an example, a directory with more than 100'000 users or groups can be imported in 60 hours instead of the 2 weeks it took previously.
Several synchronization “modes” : AD import, UP Sync and custom code synchronization.
AD Import can filter on users and groups (object selection) using LDAP queries (inclusion-based; UPS has exclusion-based filters). It requires one connection per domain. Shadow accounts are supported, and it is possible to do property mapping as well as account mapping between AD and FBA or others. Replication of AD changes is still needed, but the import is faster. There is no cross-forest contact resolution, and mapping to SP system properties is not supported. Enriching profiles with data from the BDC is not possible, and mapping multi-valued properties is not possible either. When the AD configuration (schema) changes, a full import is required, as well as a purge after the import; the full import can't be scheduled. AD Import connections are stored in the Profile DB, whereas UPS stores them in the Sync DB. Mappings and filters are not moved.
Provisioning the UPA and UPS is done in Manage Service Applications or with PowerShell, but with PS there is still the default schema issue. Two workarounds : log on to the machine using the Farm account, or manually change the data in the database (not supported).
Some profile properties are automatically added to the taxonomy when provisioning the Managed Metadata Service; indeed, the MMS is leveraged by the User Profile import. In order to start the User Profile Service Application, the Farm account has to be put in the local Administrators group. Therefore, a warning complaining that the Farm account is in the admin group will be displayed in the SP Health Analyzer. The recommendation is to enable NetBIOS, right after the UPA provisioning, if the FQDN and NetBIOS domain names don't match.
Planning is the key to success. Remember that if the data is rubbish, it will not be any better once imported. The health of the AD is very important.
The web front-end servers still make direct TDS calls to the SQL Server.
Speaker : Scot Hillier
Search should be seen as a data access technology, using client-side code (CSOM) and REST. Windows 8 (for example) can therefore leverage SharePoint's search.
Managed Properties have additional settings describing what can be done with them. The Keyword Query Language has been improved (the SQL query syntax no longer exists). KQL allows quite interesting queries and even filters (by date, for instance). XRANK makes it possible to boost some content in the results, such as pushing certain documents to the top of the search results. WORDS enables synonyms when searching.
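As a rough sketch of what such a query looks like, the hypothetical helper below composes a KQL string combining a date filter with an XRANK boost on PowerPoint files; the property names (LastModifiedTime, FileType) are standard SharePoint 2013 managed properties, but the helper itself is only for illustration.

```python
# Sketch: compose a KQL query with a date filter and an XRANK boost.
def build_kql(terms, boost_filetype="pptx", min_date="2012-01-01"):
    """Return a KQL string: match the terms, restrict by
    LastModifiedTime, and boost (not filter) results whose
    FileType matches, via XRANK's constant boost (cb)."""
    base = f"{terms} LastModifiedTime>={min_date}"
    return f"({base}) XRANK(cb=100) FileType:{boost_filetype}"

print(build_kql("sharepoint apps"))
# (sharepoint apps LastModifiedTime>=2012-01-01) XRANK(cb=100) FileType:pptx
```

Unlike a plain `FileType:pptx` restriction, the XRANK clause keeps non-matching results in the set and only pushes the matching ones up.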
Result Sources are similar to scopes (a subset of the index). They are built using a query-like language, enabling filtering by content type or metadata values. To build a Result Source, a query builder is available from the Site Settings page. The Result Source is then used by the Search Results web part.
Query Rules use words to target specific content only (e.g. "deck" would return only PowerPoint presentations). They apply to a given Result Source.
Search settings can be exported to and imported from a SearchConfiguration.xml file, which, however, does not contain master pages or web parts. This can be useful to move configuration from one environment to another. The options are directly available in the Site Settings page.
The CSWP is not available in Office 365.
Speaker : Daniel Kogan
The new (search-driven) model is about improving publishing compared to what was done so far. The search engine can crawl a lot more content than SharePoint alone can hold. Content can be published wherever it is located. It is also a way to separate the content from the presentation.
Search-driven is not about searching for content; it is about assembling pages based on search. SharePoint and external content are crawled to build the index. Some libraries can be declared as a catalog, meaning their content can be used across site collections. On the publishing side, there are the term store, the Content Search web part, managed navigation and publishing pages. Indexed content is published through the web part framework and the page framework. The idea is also to propose new content to the user, based on previously requested content.
Content Search Web Part
The CSWP executes a search query, and the content is skinned for presentation. The query can be set up to return one or more results; it can therefore be used to display a single article as well. The CSWP can also take parameters, like the term the user navigated to, to drive the search query. One little issue with the CSWP is the search latency: if a 2-minute crawl latency is a problem, then the CSWP is not a good candidate. When editing a page, two choices are proposed: either the page template, or only a given article or URL. Editing the page for one article creates an individual instance of that page. The CSWP proposes a query builder to set the query itself, as well as the refiners and sorting settings. Queries rely on the managed properties. Query Rules manipulate the way the results are returned.
The query builder is a UI-based tool to create queries on the index. It is aimed at information workers.
Query Rules are pretty technical and a little more complex in terms of management. They allow tweaking the query results based on, for example, the user. They are more for information architects and are available from the Site Settings page. Query Rules can be stopped, and it is in the CSWP that you set whether the Query Rules should be applied or not.
Three steps: enable an existing library as a catalog (or create a new catalog site from the site template), then indexing, then connection. The search index will "advertise" the catalog. The Manage Catalog Connections page displays the list of available catalogs. Connecting asks several questions before making the link to the catalog. On the library side, the catalog has to be enabled, along with the kind of filters that can be used and some other settings.
Hierarchies are now a bit different from what they were in SP2010. The intended use of managed terms has been extended to navigation. Selecting a term set in the term store offers more options, such as setting the purpose of the term set to be used for navigation. Hierarchies can differ from one site collection to another. It is possible to assign a specific page to a given term. Custom properties are now exposed in the UI. For the navigation, Managed Navigation can be selected, requiring a term set to use for the navigation.
Cross Site Publishing
The goal of XSP is to author content somewhere and to use it from any other SharePoint publishing environment. XSP is not content deployment: content deployment moves content or artifacts, whereas XSP really reuses the content, which stays at its original storage location. It requires the publishing feature to be enabled, and a catalog. It can be used when dealing with multilingual environments.
Speaker : Dux Raymond Sy
Warning, this post is absolutely not neutral, as @meetdux is one of my favorite speakers, and once again, I was not disappointed.
Before starting his presentation, he invited the attendees to vote via twitter and to see the result of the poll live on the screen.
When a user comes to you on Monday and tells you "SharePoint sucks", just answer "SharePoint does not suck. You suck!". E-mail is still the most used collaboration tool in companies. Don't be misled by the word "Social". It is not Facebook and telling what you ate at lunch; for companies, it is collaboration and working together, without mandating any preferred tool. It does not mean that a wiki or a newsfeed has to be used. Wikis or blogs will not solve the collaboration issues that companies are facing. If the business says "we need a wiki", just ask "Why? For what?".
Social fails because of a lack of executive support and ownership. Social should be understood as a way to deliver business value, not as a tool.
Step 1: Gain Executive Engagement. It means commitment from the executives. Not a one-shot, but for the long run. It has to bring financial gains, but not only: it has to promote innovation and engage people to work together.
Step 2: Develop Relevant Use Cases. Stop pushing tools and features and talk about solutions. Social communication differs from one group of people to another (HR, IT, Marketing, etc.). Get the pain points of the users and identify quick wins, using the tools people are used to or familiar with. Don't try to get people to leave Excel. To support his explanation, he does a demo of an Excel table synchronized with SharePoint 2013, browsing with a Mac. Another problem with IT is that its speech does not target the users: stop talking about SharePoint, CRM, SAP and so on, and rather speak the users' language.
Step 3: Establish a Social Roadmap. Or an Enterprise Social Journey. Help people and give them the power. Once users are empowered, they have to engage with each other. But it takes time and intention: "Nothing happens by accident or overnight". Everything can't be done at the same time, therefore prioritizing is key. He shows an example of a list of pain points and puts them in front of SharePoint out-of-the-box features. Then, for each feature, he sets the priority coming from the people, the effort to implement the feature and the impact on the business, plus a rating for reusability. From these values, he can extract the business value of each feature. But don't forget to also assess the IT impact, in terms of training, support and cost. And don't forget to make the business pay for the feature: because it is not free, they will also engage more.
Step 4 : Identify and Groom Champions. IT or developers can’t answer all the questions. Here comes the need for SharePoint Analysts.
Step 5: Deliver Sustainable Adoption. This does not mean only training; it is a constant process. Training does not make people experts: people have to practice and work on the solutions. In every implementation, people should be able to help themselves, by having a location where help can be found or videos are published. First, raise the awareness of your Champions. Then, get a bigger buy-in beyond the Champions, in other departments or groups of people. Don't forget to put a timeline and a budget along with the adoption plan. In all companies, there is a budget line for SAP stuff (training, licenses, etc.) whereas SharePoint is only a little line in the Microsoft stuff. Big problem, as it then becomes impossible to leverage the platform.
Of course, the session finished with a Gangnam Style dance session: http://youtu.be/pU-ABZ1AZi4
Speakers : Eray Chou, Keenan Newton
3 architecture options for App hosting: SharePoint-Hosted Apps (created in a separate App Web; no server-side code is allowed, but a proxy can be defined in the manifest), and for Cloud-Based Apps, Provider-Hosted Apps (hosted on-premises or on any web server) and Autohosted Apps (SharePoint automatically and invisibly hosts the App on Azure).
Choosing between Cloud-Hosted Apps and SharePoint-Hosted Apps: SharePoint-Hosted Apps are more for smaller apps and resource storage. If server-side code is needed, go with a Cloud-Hosted App, which is also the preferred hosting model in most cases.
On the services side, client.svc got extended to support REST access, with the GET, PUT and POST verbs; the protocol used is now OData. New APIs have been added to the CSOM in order to support SharePoint Server and Windows Phone applications. The API now covers the whole set of SharePoint services (i.e. Taxonomy, Workflow, eDiscovery, etc.). These last APIs are only available in the SharePoint Server SKU (in DocumentManagement.dll, Publishing.dll, Taxonomy.dll and UserProfiles.dll). REST is now really the recommended way to call the APIs.
_api is the new alias for _vti_bin/client.svc, such as http://contososerver/_api/web
Results can be returned in JSON or ATOM format, and calls can be tested in a browser. URLs now map objects to resources (_api/web/lists). Even feeds can be queried through REST calls.
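To illustrate this object-to-resource mapping, here is a minimal sketch of composing such a _api URL with OData query options in Python; the server name reuses the example above, the helper function is hypothetical, and no request is actually sent.

```python
from urllib.parse import urlencode, quote

def build_api_url(site, resource, **odata):
    """Map an object path to its _api resource URL and append OData
    options ($select, $filter, ...) as a query string."""
    params = {f"${name}": value for name, value in odata.items()}
    # keep '$' and ',' readable; encode spaces as %20 (not '+')
    query = urlencode(params, safe="$,", quote_via=quote)
    return f"{site}/_api/{resource}" + (f"?{query}" if query else "")

# e.g. the visible lists of the root web, title and item count only
print(build_api_url("http://contososerver", "web/lists",
                    select="Title,ItemCount", filter="Hidden eq false"))
```

The resulting URL can be pasted directly into a browser to inspect the ATOM response, or requested with an `Accept: application/json;odata=verbose` header to get JSON instead.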
Remote Event Receivers are introduced and are meant to be used to call external systems. Their scope is List Item, List, Web or App, and they support both synchronous and asynchronous After events. The purpose is not synchronization but rather notification (do not use them to sync or mirror content). They can be defined either with declarative syntax or via code.
This session, full of demos, was a solid developer one. If you did not attend, download it when it becomes available.
Speakers : Keenan Newton, Rolando Jimenez
This first part is about the basics and core concepts of Apps, and what makes them work.
With two products and platforms and many different services, how can we make them work together? Apps basically bridge the two worlds. The catalog and the store are there to help discover these Apps.
The goal of Apps is to unify the developer worlds (Office and web development). This is done by relying on standards.
To secure the App on the client side, it runs in the browser sandbox. On the server side, it is no longer hosted directly in SharePoint. The API is provided through CSOM, REST, Office JS or SharePoint JS.
The tools mainly used for development are Visual Studio 2012 and "Napa". But any tool could be used, such as Notepad or Eclipse.
An example is shown of how to integrate a Bing map in an Excel file and how data contained in the worksheet is used to pin locations on the map.
The first thing needed is the manifest, describing, among other things, the permissions required to run the app in Excel. The second element is the Bing HTML page to be displayed, using jQuery and the Bing map component.
The same App can be used in several Office client applications (demo of an App taking a table from Word or Excel to create a SharePoint list in Azure). When starting an Office 2013 App project, the different Office clients can be selected to make the App available there.
Provider-hosted means any web server.
Autohosted Apps allow both client-side and server-side logic. SharePoint then deploys the package to Azure.
App project templates generate a .app and a .wsp, but they do not contain any DLL, only declarative code. The .app package contains the .wsp as well as the manifest, which holds properties such as the start page URL or the AppPrincipal.
Apps can be App Parts, Custom Actions or (immersive) pages. Apps are hosted in a separate domain to avoid XSS and to allow isolation of the App.
Once developed, a SharePoint App is published in the App Catalog (or the Office Store for Office Apps). Uploading an App package from Visual Studio is simple; downloading a publishing profile for Visual Studio from the Azure portal is needed before publishing. Once the App is uploaded to Azure, the manifest needs to be sent to the catalog. When it is in the catalog, users will be able to see it and install it in their Office.
As the SharePoint Conference 2012 started with a keynote gathering more than 10'000 attendees from 85 different countries, I will post here the summaries of the sessions I attend.
This year, a little change, as these posts won't only be here, but also on 2 different community blogs :
The SharePoint Bar, because with SharePoint it is always happy hours
And SharePoint Edu Tech, from Dave Coleman, where I will be posting with a few others.
So, stay tuned here to get more stuff around SharePoint 2013.