Article | Simon Smith
From roughly the 1970s through to the mid-2000s, the prevailing approach to hosting a business's data and applications was to run your own datacentre, or to co-locate your own dedicated hardware in someone else's.
An organisation would have complete control over the performance and configuration of its systems and would know where its data was stored at all times. Dedicated teams were employed to design, build, maintain and monitor these systems. If it sounds expensive, it's because it was.
If you didn't have the resources to do this, you could rent individual dedicated servers in a specific datacentre run by a hosting company, normally under a contract of at least a year. This approach removed the need for any expertise in designing, building and maintaining expensive hosting hardware. The minimum contract period ensured that any hardware outlay by the hosting company was more or less covered over the term; you were still committed to the expense, just indirectly and in one easily understandable amount.
Some companies offered Virtual Private Servers (VPS): shares of physical machines, isolated from one another by virtual machine technology. You got remote access to what appeared to be your own dedicated server, except that the physical resources were shared with other users you had no knowledge of or control over. This was cheaper than a dedicated server, at the expense of control over performance. Contracts were typically only months long.
Figure 1 - Classic co-located hosting
In 2006, Amazon launched their Amazon Web Services (AWS) subsidiary, renting out time on computers 'in the cloud' (i.e. somewhere out there) to anyone who would pay. AWS's compute resource could be treated as a black box – users got access to run their programs and store data, but the operation of the hardware behind it all was completely hidden.
On the face of it, this doesn't differ much from a dedicated server or a VPS. The critical difference, however, is in the billing and commitment – you could pay by the hour, with no minimum contract period. You could spin up an instance for a couple of minutes if you wanted, although you would be billed for a full hour (more granular billing is available now).
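The difference is easiest to see with some entirely hypothetical numbers. A minimal sketch, assuming a £200/month dedicated server on a 12-month minimum contract versus a £0.40/hour cloud instance (both figures invented for illustration):

```python
# Toy comparison of the two billing models. All prices are hypothetical.

DEDICATED_MONTHLY = 200.0   # hypothetical monthly rate, 12-month minimum term
CLOUD_HOURLY = 0.40         # hypothetical on-demand hourly rate

def dedicated_cost(months_needed: int) -> float:
    """Contract billing: you pay at least the full minimum term,
    however little of it you actually use."""
    return DEDICATED_MONTHLY * max(months_needed, 12)

def cloud_cost(hours_used: float) -> float:
    """Utility billing: pay only for the hours the instance ran."""
    return CLOUD_HOURLY * hours_used

# A three-month project: the contract still bills the full year.
print(dedicated_cost(3))        # 2400.0
# The same project in the cloud, running 24/7 for three 30-day months.
print(cloud_cost(3 * 30 * 24))  # 864.0
```

The gap widens further for bursty workloads, since a cloud instance that is shut down overnight or between campaigns accrues no charge at all, while the contract keeps billing.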
This fine-grained billing with no minimum commitments has since transformed the way businesses approach their core computing and storage needs. These days it is rare for a company to build a dedicated datacentre for itself or keep its own hardware in a co-location facility. The liability of the physical infrastructure, and the personnel to maintain it, makes this unattractive in most use cases.
When I joined Freestyle over 7 years ago, we hosted our clients' solutions in a traditional co-location facility, handily, not that far away. Despite the proximity, it would be rare for us to visit the racks, instead managing a VMware virtualised set of machines from the comfort of our offices. The maintenance of the equipment was managed by the hosting company, but we were directly liable for the costs of any failures or any required expansion. All of this meant that any improvement requiring expansion was rather unpalatable. It was the capital outlay required to expand our hosting infrastructure that ultimately led us to look for alternatives. If we could spend smarter here, we could pass on savings to our clients or invest further into our products.
At the time, there were two major offerings that caught our eye – AWS and Microsoft Azure. AWS was far more mature than Azure (Microsoft's offering had only launched in 2010) and, critically, it gave us the familiar concepts of virtual machines and virtual networking, in familiar terminology, that we could configure however we wanted. It offered us the opportunity to replicate nearly exactly what we already had, but in an environment where we could expand or contract at a moment's notice. The barrier to adoption was low. AWS was the right choice at the time.
So, over the period of a couple of years, we migrated our clients from the old to the new, copying the configuration as closely as we could whilst also implementing some best practices around separation of concerns and durability/availability. Site-to-site VPN connections kept our administration networking secure, while we could tap into AWS's seemingly endless resource of networking capacity and IP addresses whenever needed for public access.
Moving to AWS allowed us to greatly reduce the ongoing costs to our clients and meant that there were no upfront infrastructure costs for new projects other than some configuration. We still had to maintain the virtualised servers, installing, configuring, updating and repairing the software that ran on them. We stuck with this model for several years.
Figure 2 - Infrastructure as a Service
What if we could further reduce hosting costs, and reduce the time spent on maintaining the operating systems, database servers and web servers that power our clients’ solutions?
Moving away from the virtual server approach, known as Infrastructure as a Service (IaaS), and towards a much finer-grained infrastructure and billing model, known as Platform as a Service (PaaS), is the key.
With PaaS, the software that supports the custom applications is invisible, save for a few configuration options. Updates, bugs, low-level configuration and so on are handled by the hosting company. They provide a platform on which to build applications.
AWS offers the option to use either IaaS or PaaS. However, for developers who work with Microsoft’s tools like Visual Studio and technologies like .NET and SQL Server, Azure is far more attractive. Desktop tools like Visual Studio are tightly integrated, offering quick and simple deployment and debugging options when developing on Azure. There is an abundance of documentation for integrating and cross-connecting their platform features.
Figure 3 - Platform as a Service
Aside from the tight integration of tools, there is a well-designed and reliable web-based interface to manage resources, plus APIs to enable the automation of management. As the relative newcomer, Microsoft has had the advantage of being more agile with their management interface, never having carried the burden of large swathes of customers relying on early, flawed designs. AWS, having been the trailblazer and consequently made the mistakes that later cloud implementers could avoid, has been playing catch-up on their web console.
Freestyle is primarily a Microsoft house, although we do have strong capabilities in other areas such as WordPress. Our partner Optimizely (recently rebranded from Episerver) uses Azure extensively as the basis for their DXP (Digital Experience Platform), with Content Cloud (CMS), Commerce Cloud (e-commerce) and many of their other products. As such, a move to Azure for Freestyle makes a great deal of sense.
All new studio projects built on .NET that Freestyle hosts are deployed to Web Apps in Azure. We have automated the build and deployment of these projects using TeamCity and Octopus Deploy. We encourage clients to use Content Cloud to host their CMS implementations, as there are many benefits to doing so. And now we are in the process of re-architecting and migrating our Freestyle Partners Digital Asset Management System to take advantage of the Azure platform.
So, over time we have gone from self-hosting in a datacentre, to IaaS in AWS, and now we are moving almost all our projects to PaaS. Through each phase of the journey, we have realised substantial benefits – reduced maintenance, eliminated capital costs, and improved efficiency – all passed on to our clients through lower costs and faster time to product release.
In the next article, I will go into more detail about how we are modifying and migrating Partners (our Software as a Service offering, or SaaS) to exploit the features of PaaS, and highlight some of the differences and limitations of the platforms.