Speakers: Sanjay Narang, Luca Bandinelli
The requirement is to build an internet-facing site, highly customized, that needs to be always on: minimum downtime and data loss. At which level is high availability needed? It also has to cover natural disasters, which implies different data center locations. Azure offers a connectivity SLA of 99.95%; if you want more, the solution has to be designed accordingly.
O365 is the place to go for collaboration, but that is not the case for Internet scenarios. Therefore, Azure is a good option and is able to scale on demand. A SharePoint solution on top of Azure is a Microsoft-supported solution. Specific features, such as blob storage and fast cross-datacenter transfer, will be very useful.
The solution is based on two different farms in two different Windows Azure regions, using custom log shipping jobs for data synchronization (and not SQL AlwaysOn). Azure Traffic Manager will also be used.
The content and Managed Metadata Service databases will be synchronized. Search will have two search service applications: one for production, one for DR.
Virtual networks are a challenge, as they are restricted to a single datacenter. Also, an AD domain cannot span multiple datacenters. Therefore, each farm will be in a different domain, preventing the use of SQL AlwaysOn across farms. A domain trust also has to be set up.
The primary farm in Windows Azure will have an affinity group, in which a virtual network will be defined. Different cloud services will be defined, containing the virtual machines. Each of these elements needs to be always available, using availability sets. For the front-end servers, the Windows Azure Load Balancer can be used. For SQL Server, an AlwaysOn Availability Group will be set up, with an Availability Group Listener. But this implies having all the clients in a different cloud service. For the custom log backups, blob storage will be used.
The DR farm is similar to the primary farm. The custom log shipping job will take the backups from blob storage. The content DBs and the MMS DB are read-only and not part of an AlwaysOn AG. Search is created separately, crawls the read-only content DBs, and must be scheduled outside of the restore window.
Custom log shipping is required on both farms. The backup and restore commands use a URL pointing to the storage. The challenge of having two farms in different AD domains is that accounts differ from one farm to the other, so a plain backup/restore will not work. The accounts required by the DR farm must be added first; once that is done, the databases are backed up and restored on the primary farm, which then contains the accounts of the DR farm.
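The backup/restore-to-URL pair can be sketched as plain T-SQL command strings. This is a hedged illustration only, not the session's actual code: the database name, storage account, container and credential name below are invented placeholders.

```python
# Sketch of the two halves of the custom log-shipping job as T-SQL strings.
# BACKUP/RESTORE ... TO/FROM URL with a credential is SQL Server 2012 SP1+
# syntax; all names here (database, storage account, credential) are
# hypothetical placeholders, not values from the session.

def backup_log_to_url(db: str, account: str, container: str) -> str:
    """T-SQL run on the primary farm: ship the log backup to blob storage."""
    url = f"https://{account}.blob.core.windows.net/{container}/{db}_log.trn"
    return f"BACKUP LOG [{db}] TO URL = N'{url}' WITH CREDENTIAL = N'BlobCred'"

def restore_log_from_url(db: str, account: str, container: str) -> str:
    """T-SQL run on the DR farm: NORECOVERY keeps the database in a
    restoring state so the next log backup can still be applied."""
    url = f"https://{account}.blob.core.windows.net/{container}/{db}_log.trn"
    return f"RESTORE LOG [{db}] FROM URL = N'{url}' WITH CREDENTIAL = N'BlobCred', NORECOVERY"

print(backup_log_to_url("WSS_Content", "primarysa", "logship"))
print(restore_log_from_url("WSS_Content", "primarysa", "logship"))
```

The DR farm's restore job would loop over all shipped content databases, applying each new log backup found in the container.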
For search, log shipping can't be used. Having separate search services keeps the SLAs and does not require copying the indexes. But this setup makes search analytics unusable at the global level.
The main component enabling failover is Azure Traffic Manager. Requests are always directed to the primary endpoint while it is available. A custom job polls TM to check whether the target endpoint has changed. When the primary farm goes down, TM detects it and redirects requests to the DR farm, which is read-only. The custom job detects it as well and pauses the restore job to enable read-write access. TM takes about 90 seconds to detect that a farm is not available. Once TM has switched to the DR farm, we need to prevent it from going back to the primary farm when that farm comes back online, as it is no longer the primary.
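The watcher job's decision logic can be sketched in a few lines. This is an assumption-laden sketch, not the session's implementation: the endpoint names are made up, and a real job would resolve the Traffic Manager DNS name itself instead of receiving the active endpoint as a parameter.

```python
# Minimal sketch of the custom failover-watcher job: once Traffic Manager
# starts answering with the DR endpoint, pause the log-restore job so the
# DR farm can be switched to read-write. Endpoint names are hypothetical.

PRIMARY = "farm-primary.cloudapp.net"
DR = "farm-dr.cloudapp.net"

def on_poll(active_endpoint: str, state: dict) -> dict:
    """One polling tick; `active_endpoint` is what TM currently returns."""
    if active_endpoint == DR and not state["restore_paused"]:
        state["restore_paused"] = True   # stop applying log backups
        state["dr_writable"] = True      # DR content DBs become read-write
    return state

state = {"restore_paused": False, "dr_writable": False}
on_poll(PRIMARY, state)   # normal operation: nothing changes
on_poll(DR, state)        # TM failed over (detection takes ~90 s)
print(state)
```

The one-way flag mirrors the point above: once the switch has happened, the job never hands control back to the old primary automatically.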
The issue now is that once the switch to the DR farm is permanent, there is no DR anymore. It has to be rebuilt, similarly to how the original DR farm was built. During patching, the DR farm can be used temporarily, but think about the SLA, as it will be read-only. Consider also using the Content Delivery Network to cache pages and other content.
Speakers: Sonya Koptyev, Greg Lindhorst
After the announcement of InfoPath's discontinuation, a quite full session was expected, and that was indeed the case: many people seeking information about the future of forms.
4 main scenarios were presented.
Excel Surveys, with which questionnaires can be designed and proposed to users for filling in. For each question, a column is added to the Excel worksheet. The different data types are supported and the editor is simple to use.
A brand new feature, which was apparently shown for the first time: FoSL (Forms on SharePoint List). This feature, available from the ribbon next to the InfoPath "Customize Forms" button, opens an editor showing the fields already available on the list. The designer allows the user to place the fields wherever he wants on the design surface, and also to resize them. In list editing mode, the form is displayed with a user interface similar to the one used in Access Services.
Another way to publish forms is to use structured documents, in other words, a Word document containing fields.
The last possibility is App forms or Access Services.
All the presented solutions are for information workers and do not require development or code (no CSR, LightSwitch or Visual Studio).
Currently, there are multiple alternatives, from Nintex to Formotus, just to name two of them.
A roadmap was presented for the next year, and the features are not yet frozen, as community input is very welcome. InfoPath will stay for a while and will be supported until 2023.
There are currently no migration tools or techniques, and Microsoft is thinking about what can be done.
Speaker: Rafal Lukawiecki
Data mining is about exploring data and finding correlations within it. It can also be used to make predictions and to find patterns. But prediction does not mean predicting the future: predicting the future means making the strong assumption that nothing will change around you.
Predictive analytics is about understanding customers and building effective marketing campaigns.
In order to do data mining, the data must have some structure: attributes, flags, etc. But you have to flatten or de-normalize the data structures, which potentially means a lot of data with a lot of different columns.
As output, there are analyses, such as a risk of fraud or a measure of happiness. Another output can simply be clusters or groups.
Three steps are necessary: defining the model (inputs and outputs), training the model, and validating the results, which is likely the most important one.
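As a toy illustration of those three steps (not anything shown in the session, which used SSAS), here is a one-feature threshold "model" in plain Python; the model itself is invented purely to show where training and validation fit.

```python
# Toy define/train/validate loop: a single-feature threshold classifier.
# This only illustrates the three-step workflow; real mining would be
# done in SSAS or R, not in hand-rolled Python.

def train(rows):
    """Training: place the threshold midway between the class means."""
    pos = [x for x, label in rows if label]
    neg = [x for x, label in rows if not label]
    return (sum(pos) / len(pos) + sum(neg) / len(neg)) / 2

def validate(threshold, rows):
    """Validation (the most important step): accuracy on held-out rows."""
    hits = sum((x >= threshold) == label for x, label in rows)
    return hits / len(rows)

training = [(1.0, False), (2.0, False), (8.0, True), (9.0, True)]
holdout = [(1.5, False), (8.5, True)]

t = train(training)          # midpoint of class means: 5.0
print(validate(t, holdout))  # accuracy on the hold-out set
```

The key point the session stressed survives even in this toy: the accuracy figure comes from data the model never saw during training.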
From the data, the data mining engine feeds a mining model.
On the backend, SQL Server with Analysis Services is required, starting with version 2008. Since 2012, SSAS comes in two flavors: multidimensional and tabular. But for data mining, no cube is needed.
On the frontend, only Excel is needed, plus the free Data Mining Add-ins. The data for the Data Mining Add-ins must reside in the Excel sheet. SQL Server Data Tools might be used to manage data mining projects. Additionally, SQL Server Management Studio may be helpful as well.
For model validation and statistics, R is the reference (http://cran.r-project.org/), bringing additional statistical tools not available in Excel or SQL.
An excellent presentation by an excellent, enthusiastic speaker!
Speaker: Dan Holme
Why does SharePoint 2013 deliver business value and decrease risks?
SharePoint 2010 is a great product, but not for the world we are living in today. Since 2007, some revolutions have taken place. People became the center of attention, with mobile devices and social networking. The cloud also emerged in that period. From those trends, Microsoft developed SP2013 and Office 365. SharePoint 2010 is so 2006.
Instead of upgrading, move forward. Basically, an evolution of the workload and not a big upgrade project, because the latter delivers little or no value.
But at some point, migration will be needed (for support reasons, maybe).
He does not advise waiting for a Service Pack or a specific version. The main reason is that Microsoft also runs SharePoint internally, which makes the product more stable and reliable from day one and from the first version. Service Packs are now just cumulative updates, introducing new features. Stop giving justifications for waiting and really move forward.
Business does not wait and has evolving needs that must be answered quickly. If IT can't deliver, the business will go around it.
Migrating to SP2013 should be done by adding services to the current SP2010 implementation, for example by deploying the Search Center or My Sites. Web content management and mobile device access are two examples of big drivers for migrating the workload to the new platform.
This leads to a hybrid solution, with SP2010 and SP2013 side-by-side, each achieving different business goals.
When SP2007 is still in place, several factors need to be considered (decision tree). One of them is whether SP2010 can answer the business needs; chances are that it can be done better in SP2013. In the end, there is no real business that would benefit from sticking with SP2010.
Upgrade is dead. Migrate instead. There is no in-place upgrade; a new farm has to be deployed, doing a database migration (for example). SharePoint 2013 keeps the 14 hive from SharePoint 2010, so content from that hive is still available and still works. Users may not even see the migration to SP2013, by staying in SP2010 mode. The reason for this compatibility is that Microsoft also had to use it for Office 365, to avoid users suddenly seeing the 2013 user interface overnight.
The sequence, therefore, is: build the servers, deploy the SP2010 customizations, deploy and upgrade the services, migrate to claims (the new features rely heavily on claims), then upgrade the content databases and site collections. Don't forget to back up and test everything. This can be done in a quite short amount of time (an example was given of 12 TB of data migrated over a 4-day weekend). If something works in SP2010, it should work in SP2013. From a technical perspective, it is not possible to go directly from SP2007 to SP2013; a step through SP2010 has to be made. Don't stay too long on SP2010 and move quickly to SP2013. Moving from 2007 to 2013 can be done with a 3rd-party tool though.
A database attach upgrade should also be considered when deploying a Service Pack: start with a clean farm and then migrate the databases to the new farm, instead of doing an in-place upgrade.
For cloud services, there are different kinds of cloud: SaaS with O365, IaaS with Windows Azure, managed IaaS where the management of the infrastructure is outsourced, and the private cloud. Truly said, "private cloud" is new wording for "on-premises".
Team sites are typically things that can be deployed in the Cloud, such as O365. The same for extranet scenarios or social features.
Public-facing websites, full-trust solutions or development environments are more typically deployed in IaaS.
Moving to one type of cloud or another depends on the workload; not everything must be migrated to the same cloud. Currently, tools and guidance are still incomplete, and there is no magic button to move to the cloud. Hybrid service architecture challenges should be addressed early (before the business comes with a burning need). Also, architect the on-premises implementation to reflect O365, for example by separating customized solutions from out-of-the-box implementations. Building customizations for the cloud as much as possible is crucial too. Use full-trust solutions only when necessary.
This will end up in a hybrid solution, in terms of cloud type (O365, IaaS, on-premises), versions (2010, 2013), editions of SharePoint, and services.
Title: Getting Started with SharePoint 2013
Author: Robert Crane
This book explores the very first steps in SharePoint 2013, using a standard team site. It starts with an explanation of how to use document libraries, calendars and some other types of libraries and lists. It then finishes with search and the recycle bin.
Book Review:
For the price of the book, there was no risk in having a look and reading it. Unfortunately, it stays at a very basic level, covering the usage of only some of the library and list types. Yes, it explains how to upload a file and how to recover a file from the recycle bin, but, from my point of view, most of the things described in this book can be discovered by a user exploring the platform. Moreover, it only covers some of a team site's features. In my opinion, this book can be skipped, and a reader who wants to explore SharePoint 2013 should rather go directly to a book like SharePoint 2013 For Dummies (which I haven't read yet), which will go beyond what Getting Started with SharePoint 2013 covers.
Title: SharePoint 2013 – Planet of the Apps 2.0
Author: Sahil Malik
SharePoint 2013 comes with a new development model, based on Apps. This book goes through the different kinds of Apps, giving examples of each of them, explaining what Apps are and building each new example on top of the previous one. It is an introductory book and is not intended to be an in-depth one going into all the details of App development. This is understandable, looking at the topic and how vast it is.
Book Review:
The very good thing is that the book is written in such a way that you read it fast. It is not a 600-page paving stone, and to give an overview of SharePoint 2013 Apps, it is perfect. It starts with a really simple SharePoint-hosted App and, going further, adds complexity and ends with a server-to-server type of App, covering permissions, Azure ACS and many aspects that a developer getting started with App development should know. That said, as some subjects are complex, some parts of the book should be read carefully and some time should be spent to really understand certain notions before moving on to the next example or chapter. Additionally, the writing style is nice and Sahil uses good humor to help digest some topics.
For me, this is the book to start with (well, at the same time, I haven't read many App development books so far; that is coming…), giving the first steps to develop SharePoint 2013 Apps. It is short yet long enough to get a good understanding, and finally, it is fun. And remember, "Hash is legal in Amsterdam (almost)" (a reference to the first version of the book).
After setting up a new SharePoint 2013 environment, I started testing it by creating a really simple SharePoint-hosted App, a basic "Hello World". For this environment, I am using a Visual Studio 2012 development machine remote from the SharePoint 2013 box. To test this very simplistic application, I just pressed F5 to launch the VS debugger, landed on the SharePoint 2013 page, and was able to see my App in the quick launch menu. But when I clicked the link, I got a nice "The resource cannot be found" (404), as shown in the picture at the beginning of this post.
I checked the SharePoint 2013 App settings several times, such as the "App domain" URL and the "App prefix", and they were correct. I also checked the DNS settings and the bindings of the IIS site, and everything was perfect.
During my troubleshooting, I saw that deploying the App manually worked perfectly. This means there was a difference either between a deployment done from VS and a manual one, or in the execution of the App.
Googling a bit, I found this post on the Microsoft forums: http://social.msdn.microsoft.com/Forums/en-US/appsforsharepoint/thread/188d78d8-8c35-46df-8770-695d1258ad18/
In this long thread, people mention that they were adding a colon to the loopback IPv6 address that VS writes in the hosts file (located in %Windir%\System32\drivers\etc), making the ::1 address invalid. This indeed worked for me, but raised another question. VS was adding two IP addresses for the same host:
Clearly, ::1 is the IPv6 equivalent of 127.0.0.1, but my App was not running locally; it was running on the 10.180.128.195 server. So, why was the IPv6 entry wrong and not equal to my SharePoint 2013 server's IPv6 address?
While in debug mode, I replaced the ::1 address with the real IPv6 address of my SharePoint 2013 server. And… it worked like a charm.
So far, coming from the many different tests I did, my theories are (and be cautious, because they need to be confirmed):
- Adding a colon to the loopback IPv6 address makes it invalid (RFC 5952). This causes my development machine to fall back to IPv4 to connect to the server.
- The reason why VS adds the loopback IPv6 address instead of the correct one is likely that it cannot resolve the host name over IPv6. Rather than adding no entry at all, it adds the ::1 address.
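The "extra colon" trick can be verified with the Python standard library: `:::1` is simply not a parseable IPv6 address, so the hosts entry is effectively disabled and name resolution moves on.

```python
# Why the workaround works: ":::1" is not valid IPv6 (the text form allows
# at most one "::" run), so the hosts entry is ignored, while "::1" is the
# IPv6 loopback that VS writes.
import ipaddress

def is_valid_ipv6(text: str) -> bool:
    try:
        return ipaddress.ip_address(text).version == 6
    except ValueError:
        return False

print(is_valid_ipv6("::1"))   # the entry VS writes: valid loopback
print(is_valid_ipv6(":::1"))  # the deliberately broken entry: invalid
```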
As also written in the MSDN forum, to avoid having to manually change the hosts entries every time during a debugging session, disabling IPv6 is a good workaround, and most probably not an issue for most people.
Last November, during the SharePoint Conference 2012, I bought the Surface RT and so far, I’m really happy with it (and don’t want to change for a Pro version).
But yesterday, when I took it to read my favorite blogs, I saw that there was no WiFi connection. I was still able to see the dozens of non-hidden WiFi networks of my neighbors, but none of the 3 SSIDs (2 of them visible, but protected) that I use. Switching flight mode on and off didn't help, so I tried the option to refresh the Surface, and after about 30 minutes, still nothing.
The day after, I tried to connect it to another WiFi network and there was no issue. I took this opportunity to download and install all the available updates, in case one of them could fix a potential bug (knowing that many people have WiFi connectivity issues with the Surface). Back at home, no luck. Still not able to connect to my hidden SSID or the two others.
Trying my search engine again, I found a superuser.com post about an issue connecting a Windows 7 computer to a WiFi router. One of the answers was to change the WiFi channel used by the router. I then refined my searches and found an interesting thread on the XDA Developers forum. There, it says that the US Surface RT can't connect to channels 12 and 13.
Looking at my router's WiFi settings, I saw that it was in "Automatic Mode", and setting it to a specific channel other than 12 or 13 made my Surface find the network again. Honestly, I didn't check whether it is confirmed that the US Surface RT really can't connect to channels 12 and 13, but at least it works for me, and maybe for others as well.
So if, like me, you bought your Surface RT in the US, check your router's WiFi settings when you lose your wireless network, and try changing the channel used…
After two weeks of vacation in and around Las Vegas following the SharePoint Conference (@SPConf, #SPC12), I had time to think about what I found good and less good at this event. Even though it was not necessary to let that much time pass before writing a wrap-up, stepping back from the conference was not a bad way to make things more objective. So, in this post I will review the different aspects of the event, from the location through to the content.
When it was announced in Los Angeles last year that the 2012 SharePoint Conference would take place in Las Vegas, I immediately thought of registering, mainly for two reasons. First, I really love Las Vegas; second, it meant that something great would happen. Even if most attendees knew that a new release would come, not many people thought that SharePoint 2013 would come out in such a short period of time. Since SPC09, I was convinced that one of the reasons for going back to Vegas was a new version of the product (a bit of intuition as well). The Mandalay Bay is a great place for such a huge event, even if I'm not sure it is the only convention center able to hold that many session rooms and 10,000 people. Moreover, the number of places where parties or dinners can be held is unbelievable. Taking a room in the same location (either the Mandalay Bay or TheHotel) is also a good decision: going from the room to the Convention Center already takes 15 minutes, and staying in another hotel would take much more time (even from the Luxor, count 10 more minutes). So, for people wondering whether it is worth staying in the same hotel as the conference, the answer is: YES.
2011 was the first time I really used Twitter, and I was blown away. For a long time I had wondered about the purpose of such a tool: is it really useful? A conference is a really good example of a use for it. The medium was perfect for propagating information, room changes and other important news. Moreover, communication to and from the conference organizer (@SPConf) was radically easier. It also encourages people to interact with others and exchange experiences or news. On the real-life side, this year was amazing. I saw and was involved in many discussions with people I had never met before, or people I had only exchanged with on Twitter or via e-mail. One could see people really talking and making new friends. The Community Hub, set up by Joel Oleson (@joeloleson) and Mark Miller (@EUSP), was a great success. A lot of people passed by the booth to meet, and it was very dynamic, showing that the SharePoint community is not only a word but is also very active. Here, my advice to people new to a (SharePoint) conference would be: don't be shy, engage with others and take a good bunch of business cards.
With the partner events and the Microsoft Tuesday event with Bon Jovi, there were many occasions to party all week long. The Passport party, red party and green party contributed to the lot of fun people were able to get. They were great events. Nevertheless, it goes fast, and being able to meet each of your friends during the week is definitely hard. It goes too fast. Also, don't go to sleep too late; you would pay for it during the next days of the conference. A special mention also to Erica Toelle (@ericatoelle) and her #SPCSuite idea, which was a lot of fun.
Once again, the organization was great. Even with 10,000 attendees, registration went smoothly, with almost no waiting time. During the week, I didn't see any big problems. If I had to name an area to improve, it would probably be session room allocation. Several times, sessions were packed and it was no longer possible to enter the room. An idea to avoid these situations in the future would be to enforce registration for the sessions. On the other side, I was really surprised to see the lack of tolerance of some people. Once, I saw a blue-shirted lady explaining hundreds of times that the session was full and no more people could enter. Suddenly, one or two guys insulted her so much that she had to leave, shocked. I don't understand this kind of behavior from an attendee; it is like shooting the messenger. I would also like to thank all the blue-shirted people for their guidance and kindness during the week.
Another point to improve is the break between sessions, which was not long enough, at least from my point of view. 15 minutes to go from the lower level up to the 3rd level, with so many people in the corridors, is too short. I would be open to sessions starting earlier in the morning and finishing later in the evening to make the breaks longer, as I know many people do not attend the same kind of session (business, developer or IT) the whole day but rather switch from one to another. Splitting the session types by floor was very good though.
Organizing catering for 10,000 people is really serious and can't be improvised. Here as well, thumbs up for the organization. I never waited more than a few minutes to reach the buffet and then find a seat. Again, we should not be afraid of joining a table that already has many people around it; it is a very good way to make new friends or network a bit. For non-American people, the food can be... different, but it was OK.
Seeing my colleague come back from the Build Windows conference with a Surface and a Lumia 920 made me mad. OK, not that much, but still. Of course, we could not imagine that Microsoft would hand out 10,000 Surfaces to the attendees. Instead, we got a backpack and a bottle. I can't count the number of bottles I have gotten from conferences. On the first day of the conference, I immediately thought that a nice piece of swag would have been a 3G SIM card for smartphones. At least people would have been able to tweet or get access to the internet (see my last paragraph).
From the keynote on, it was obvious that 3 main topics would be addressed during the week: social networking, Apps and the cloud. And we were not disappointed. Almost. In reality, when discussing with colleagues and friends, it appeared that several sessions were similar. Moreover, there was a lack of in-depth sessions, most likely because of the recent release of the platform. I didn't attend any sessions that were really bad, but watching a few videos and gathering some feedback, it appeared that many presentations were not prepared or rehearsed enough to avoid bugs or issues on stage. Another thing: promoting the "all-in-the-cloud" strategy works better when there is a network, but I will come back to this specific point later. Indeed, many demos failed because of connectivity. I was also lucky to mostly attend great sessions held by top speakers. When you attend a session with Andrew Connell or Eric Shupps (@eshupps), it is a guarantee that you will have a good time. But according to some other attendees, not everyone was that lucky. On the other side, it is true that on some occasions I attended sessions whose title was not really aligned with the content. The strategy I adopted was mainly to select sessions according to the presenter. My favorite presentation, because of its originality and because it was really spectacular, was without doubt "Zero to Live in 60 minutes using SharePoint 2013 Publishing", with Andrew Connell, Daniel Kogan and 4 other Microsoft Program Managers.
WiFi (because it deserves its own section)
Finally, and even if it was "heard loud and clear" by the organizer, the WiFi connectivity was not just bad, but awful. I understand that providing WiFi for 10,000 people is not easy, but there was already a warning in Los Angeles, where attendees complained that the WiFi was not reliable. Also, nowadays, both the technology and the people running it should be able to support such a large number of connections. In the end, 2 days without connectivity is simply not acceptable. Sorry. Moreover, on the last day of the conference, connectivity was also lost for the exhibitors. Again, when promoting the cloud, that is a bit of an issue.
Last but not least, I would like to thank Dave Coleman (@davecoleman146) for offering his blogging platform to people like me; I was happy to meet Dave and chat a bit during the conference. I hope to see the people I met or missed again over the next year or at the next SharePoint Conference (not yet announced).
Definitely, the SharePoint Conference is THE conference to attend when working with SharePoint.
Speaker: Mirjam Van Olst
With SP2007, web templates required creating a site definition to be deployed in the 12 hive. Changes to the template required a change to the site definition.
From SP2010, web templates can be changed afterwards, even if sites were already created based on that template. Templates are also saved as .wsp files.
A web template can be site- or farm-scoped and uses the WebTemplate feature element. Only the onet.xml and elements.xml files are required. "Site template" and "web template" can now be used interchangeably; they appear the same to the user, with no difference. "Save as a site template" creates a sandboxed solution, stored in the site collection gallery, which can be imported into Visual Studio. But the import is difficult, and it is probably better to create a new site definition. Web templates are based on a site definition but do not inherit from their base site definition. Saving a publishing site as a template is not supported.
Some web template limitations: feature stapling can't be used, and neither can variations (which would be the only reason to go for a custom site definition).
The web template provisioning steps: first, the URL for the site is created. Second, the GLOBAL onet.xml file is provisioned. For a site-collection-scoped web template, the site collection features are activated in the order they are declared in the onet.xml. For a sub-site-scoped web template, a check that the site collection features are activated has to be done. Then, the site-scoped features are activated in the order they are defined in the onet.xml. Finally, the list instances are created. If a feature needs to modify a list, it can't be done this way, as no list exists yet at activation time. Therefore, list creation should be done during feature activation (event receiver).
A web template requires some properties: BaseTemplateName, BaseTemplateID and BaseConfigurationID. When starting a web template, it is recommended to take a copy of the out-of-the-box onet.xml and strip it down rather than starting from scratch. A web template onet.xml can contain only one configuration, and its ID has to be 0. The Modules element is not supported in the onet.xml.
Recommendations: use only as many features as necessary and limit the number of features that need to be activated (site provisioning slowness). Be careful: site-scoped features can block sub-site creation.
There are two ways to deploy a web template: as a farm solution or as a sandboxed solution. A farm solution makes the web template available in the whole environment. With a sandboxed solution, the onet.xml and elements.xml files are stored in the content database. A sandboxed solution can also be deployed to Office 365. In most cases, everything done in a web template can be put in a sandboxed solution. But make sure that the solution can be removed after a site has been created.
A web template can be used from code to provision new sites, using the web template feature GUID and name, separated by the hash sign. It is also a good idea to store the web template name and version in the property bag (<PropertyBag> element) of the site.
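The identifier format can be illustrated as a plain string. The GUID and template name below are made-up examples, and in real server-side code this string would be passed to the site-creation API rather than printed.

```python
# Building the "{feature GUID}#TemplateName" identifier used when
# provisioning a site from a web template in code. The GUID and the
# template name here are hypothetical examples.

def web_template_id(feature_guid: str, template_name: str) -> str:
    """Join the web-template feature GUID and template name with '#'."""
    return f"{{{feature_guid}}}#{template_name}"

print(web_template_id("a0e5a120-1111-2222-3333-444455556666", "ProjectSite"))
```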
A webtemp file can be linked to several site definitions.
Apps for SharePoint must be self-contained.
The domain of the App Web is different from the one the user is browsing (the Host Web). The App Web is created starting from the APP#0 site definition. It is also possible to create the App Web using a custom web template, which is deployed in the App itself in a web-scoped feature and has to be defined in the appmanifest.xml file.