Is the Cloud Really “Just Someone Else’s Computer”?

Mihai Tataran
General Manager & Partner | Azure MVP

I have been reading many materials and articles basically saying that “there is no Cloud, it’s just someone else’s computer”. Some of those articles, for instance, can be found here or here. These and others like them vary from nonsense to technical explanations of why it supposedly does not matter whether you host your solutions and data on your own servers or in the Cloud.

I manage Avaelgo, where we are at the forefront of the Cloud movement, with customers from Romania to the EU and the US, and from small online businesses to major banks. Having this kind of exposure and experience, I cannot stress enough how strongly I disagree with these opinions. In this post, I will explain why.

The Five WHYs for Cloud

There are many reasons a Public Cloud provider is very different from a hosting organization, or from your own datacenter for that matter. I will enumerate just a few of those reasons.

Utility services vs fixed costs. Or OPEX vs CAPEX.

The Cloud is a revolution in the business model as much as a revolution in technology. Probably one of the primary reasons is that with the Cloud you pay for almost exactly how much you use: you sign up for a subscription, you use as much as you need, and at the end of the month you pay for the service. There is no upfront investment in servers and no need to hire your own team of system administrators; or, your team can be smaller and bring more value to your business by doing something other than building and administering basic infrastructure.

The same happened with the big change in electricity supply at the end of the 19th century when, thanks to technological innovations from Tesla’s company, electricity started to be supplied over long distances. Before alternating current, electricity could not “travel” more than a few dozen meters, so if you needed electricity you had to have your own generator in your backyard. That changed with Tesla and his company, who started selling electricity as a service, reducing the overall cost and, more importantly, allowing people and businesses to get electricity on a subscription model, without the big initial investment in a generator.

The same is happening with IT. Of course, as it happened with electricity – where we still see a lot of reasons for businesses to have their own generator (backup, security, etc.) – the same happens with the IT infrastructure: there are many scenarios where you still need your own.

One big – very big – trap, which I hear too many times when talking with executives, is that the Cloud is more expensive. They look at a hosting provider’s website and see that a particular Virtual Machine (VM) costs X EUR per month, then they check the Microsoft Azure website, see that a similar VM costs X or X + 5% EUR per month, and conclude that Azure, or the Cloud, is more expensive. Without deep knowledge of the Cloud, without understanding what you are looking at, you fail to realize that you are actually comparing apples to oranges. Let me dive into this example, simple as it is:

Features with price included | Hosting provider VM | Microsoft Azure VM
Disaster recovery | NO (or sometimes) | YES
Backup (ability to recover the actual VM state) | NO | YES
Networking services | NO (or some) | YES (e.g., Load Balancer)
Cost flexibility | NO (you pay full price per month) | YES (you pay per minute of usage)

Simply put: a VM in the Cloud gets you more features, usually less administrative work, and the ability to pay per minute of usage instead of paying for a full month independent of the actual usage.
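To make the apples-to-oranges point concrete, here is a back-of-the-envelope sketch in Python. The prices are hypothetical, purely for illustration (check the providers’ pricing pages for real numbers); the point is the billing model, not the exact figures.

```python
# Hypothetical list prices, for illustration only.
hosting_eur_per_month = 100.0                            # fixed, billed for the full month
azure_eur_per_minute = (100.0 * 1.05) / (30 * 24 * 60)   # ~5% higher list price, per minute

# A dev/test VM running 10 hours a day, 22 working days a month.
minutes_used = 22 * 10 * 60

hosting_cost = hosting_eur_per_month                     # you pay for the month regardless
azure_cost = minutes_used * azure_eur_per_minute         # you pay only for minutes used

print(f"Hosting: {hosting_cost:.2f} EUR, Azure: {azure_cost:.2f} EUR")
```

Even with a 5% higher list price, a machine that runs only during working hours costs roughly a third as much under per-minute billing.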

Prediction vs on-demand infrastructure

Estimating your future IT infrastructure needs is very hard when you build it on your own, in your datacenter or at a traditional hosting provider. Say you launch an e-commerce website or an HR system for your company, and you need to evaluate how many servers you need to keep up with demand. You either overspend – you make sure your IT infrastructure covers the highest demand peaks – or you don’t have sufficient resources at the peaks and you experience downtime or lose business.

What if your solution could scale on demand, on its own, based on your settings? Say that for my system we start with 4 machines, but if the number of HTTP requests waiting in the queue (not being served) per second gets above 100, I want the system to expand with one more machine; and when the HTTP request queue gets to 0, I want the system to shrink by one machine. Is that better than trying to guess? Of course it is, and this is what the Cloud is about. A real Cloud provider (in my opinion, there are 3 real players and many followers) gives you this ability.
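The scaling rule described above can be sketched as a simple decision function. This is a toy illustration, not the actual Azure autoscale engine; the `maximum` cap is my own assumption added so the fleet cannot grow without bound.

```python
def desired_instances(current: int, queued_per_sec: float,
                      minimum: int = 4, maximum: int = 10) -> int:
    """Decide the next instance count for the toy autoscale rule above."""
    if queued_per_sec > 100:               # requests piling up: scale out
        return min(current + 1, maximum)
    if queued_per_sec == 0:                # queue drained: scale back in
        return max(current - 1, minimum)
    return current                         # otherwise, hold steady

print(desired_instances(4, 250))   # queue growing: add a machine
print(desired_instances(5, 0))     # queue empty: remove a machine
```

A real autoscaler would also add cool-down periods between scaling actions, so a momentary spike does not cause the fleet to oscillate.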

Economy of scale

The 3 major Cloud providers have huge economies of scale. We are talking about almost 40 Microsoft Azure regions (each with at least 2 datacenters for failover), totaling millions of servers. They buy servers by the container, they buy Internet bandwidth in bulk, they generate their own power, and they use water for cooling – water which is then supplied as a utility to cities near the datacenters. They are much more efficient than any smaller hosting provider.

Why is this important? For at least two reasons:

  • Because of this capacity, they can serve us when we need it. For example, within some limits, creating a VM in Azure takes a few minutes. Creating 100 VMs in Azure takes the same few minutes.
  • Because prices keep going down. I just explained the structural reason (scale, getting bigger and bigger), and there is also the competition from AWS and Google.


Security

Yes, security. A few years ago, many IT managers could not have imagined the words “Cloud” and “Security” in the same sentence. There are many reasons why your data and your applications are more secure with a Cloud provider than on your own premises or at a smaller company.

Why is a Public Cloud provider (usually) more secure than you or than a hosting provider? In short, because:

  • Compliance and certifications
  • Scale
  • Technology

Compliance and certifications

In the complicated, connected world we live in – with a lot of for-profit cybercrime from shady organizations, with cyberthreats posed by government agencies, and so on – it is harder and harder to maintain a decent level of security. That is one of the reasons we now have standards and regulations like ISO 27018 or the General Data Protection Regulation (GDPR), and the Public Cloud providers make a constant effort to keep up with them. They are complicated and require many resources, and complying with them implies a default level of security much higher than that of entities who do not comply.


Scale

Because Cloud providers have so many customers, they are exposed to many security attacks every single second of the day. The more companies have assets in the Cloud, the more attacks we see against the Cloud. Microsoft, for example, is able to continuously improve its ability to identify, prioritize, and respond to threats through many measures, one of which is threat intelligence. They apply Machine Learning techniques to the multitude of data (actions inside and outside their infrastructure), detecting patterns and engaging predictive measures.

In simple words, if your assets reside with such a Cloud provider, you benefit from the lessons learned from attacks on other customers of the same Cloud provider. And this usually happens in real time.
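As a toy illustration of that pattern detection (vastly simplified compared to real threat intelligence pipelines, which combine many more signals and models), one could flag outlier sources of failed logins with basic statistics:

```python
from collections import Counter
from statistics import mean, pstdev

def suspicious_sources(failed_login_ips, z_threshold=3.0):
    """Flag IPs whose failed-login count is a statistical outlier."""
    counts = Counter(failed_login_ips)
    mu = mean(counts.values())
    sigma = pstdev(counts.values())
    if sigma == 0:                  # all sources look alike: nothing to flag
        return []
    return [ip for ip, n in counts.items() if (n - mu) / sigma > z_threshold]

# 20 IPs fail once each; one IP fails 50 times and stands out.
events = [f"10.0.0.{i}" for i in range(20)] + ["10.0.0.99"] * 50
print(suspicious_sources(events))   # the outlier IP is flagged
```

The value of scale is exactly here: the more login events (from more customers) feed the statistics, the more reliably an outlier stands out.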


Technology

At least 2 of the major Cloud providers (Microsoft and Google) are also major technology providers. What do I mean by this? Well, if you are a software company, an infrastructure company, or a company offering hosting services, chances are you work with Microsoft or Google technology. They created or own things like Windows, Windows Server, Hyper-V, and Go – or, more relevant to this topic, the Microsoft Intelligent Security Graph. These guys built the very products they use to provide services to us, whereas a hosting provider merely uses the products and technologies from those guys.

Who do you say is more qualified?

Operational capabilities

Besides the technology itself, one of the key factors in successfully running the infrastructure behind a solution is operational capability. All Cloud providers are constantly learning, and they still experience outages or downtime; even so, I would say chances are they are much more capable of operating an infrastructure than most companies, hosting providers included.

Just think about these things, which could have a major impact on your day to day business:

  • Backup and Disaster Recovery. I already wrote about this here, not going to go into details now.
  • Monitoring, diagnostics, and analytics. On your own premises, and often at hosting providers, you must set those up and manage them yourself, while the Cloud providers offer built-in monitoring tools.
  • Automation. With technologies like Azure Resource Manager (ARM), it is extremely easy to set up an infrastructure from reusable templates and then automate these tasks with tools like PowerShell.
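To illustrate the template idea, here is a heavily simplified sketch of generating an ARM-style template body from Python. This is illustration only: a real ARM template also needs VM sizes, OS images, network interfaces, an `apiVersion` per resource, and so on.

```python
import json

def vm_template(vm_names, location="westeurope"):
    """Build a minimal, simplified ARM-style template for a set of VMs."""
    return {
        "$schema": "https://schema.management.azure.com/schemas/2019-04-01/deploymentTemplate.json#",
        "contentVersion": "1.0.0.0",
        "resources": [
            {
                "type": "Microsoft.Compute/virtualMachines",
                "name": name,
                "location": location,
            }
            for name in vm_names
        ],
    }

# The same template describes 2 machines or 200 -- that is the point of automation.
print(json.dumps(vm_template(["web-1", "web-2"]), indent=2))
```

Because the infrastructure is described as data, deploying the same environment again (for Dev, Test, or Staging) is a matter of re-running the deployment, not repeating manual steps.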


There will always be a place for hosting providers, especially because of the degree of customization they can offer, and there will always be a need for some infrastructure on your own premises. But the Cloud has a very strong proposition, from both the technical and the business perspective; it is here, it is gaining momentum, and the reasons not to be in the Cloud are shrinking every month.

About the author

Mihai Tataran
Mihai Tataran is the General Manager of Avaelgo, a Microsoft MVP on Microsoft Azure, a Microsoft Azure Insider, and a Microsoft Certified Professional. Mihai has been teaching Microsoft technology courses to software companies in Romania and abroad, and has been invited by Microsoft Romania to deliver many such trainings for their customers.
Top 4 Security Dimensions for a Successful Software Development

Ioan Popovici
Chief Software Engineer

Having a job that requires deep technical involvement in a prolific forest of software projects certainly has its challenges. I don’t really want to emphasize the challenges, though; I want to talk about one of its advantages: being exposed to the issues of secure software development in our current era.

Understanding these four basic dimensions of developing secure software is key to starting to build security into the software development lifecycle.

Dimension Zero: Speaking the same language

The most frequent problem I have found in my experience, regardless of the maturity of the software development team, is the heterogeneous understanding of security. This happens at all levels of a software development team: from stakeholders and project managers to developers, testers, and ultimately users.

It’s not that there is a different understanding of security between those groups. That would be easy to fix. It’s that inside each group there are different understandings of the same key concepts about security.

As you can expect, this cannot be good. You cannot even start talking about a secure product if everybody has a different idea of what that means.

So how can a team move within this uncertain Dimension Zero? As complicated as this might seem, the solution is straightforward: build expertise inside the team and train the team in security.

What should a final resolution look like at the end of this dimension? You should have put in place a security framework that lives alongside your development lifecycle, like Microsoft’s Security Development Lifecycle (SDL), for example. Microsoft SDL is a pretty good resource to start with while keeping the learning loop active during the development process.

Dimension One: Keeping everybody involved.

Let’s assume that a minor security issue appears during the implementation of some feature. One of the developers finds a possible flaw. She may go ahead and resolve it, consider it part of her job, and never tell anyone about it. After all, she has already been trained to do so.

Well… no!

Why not, you would ask, right?! This looks counterintuitive, especially because “build expertise inside the team and train the team in security” was one of the pieces of advice for Dimension Zero.

Primarily because that is how you start losing the homogeneity you gained when tackling Dimension Zero. Furthermore, there will always be poles of security expertise, especially in large teams, and you want the best expertise available when solving a security issue.

Dimension Two: Technical

Here’s a funny fact: we can’t take the developers out of the equation, no matter how hard we try. Security training for developers must include a lot of technical detail, and you must never forget about:

  • Basics of secure coding.
    (E.g., avoid stack/buffer overflows; understand privilege separation, sandboxing, cryptography, and… unfortunately, many more topics)
  • Know your platform. Always stay connected with the security aspects of the platform you are developing on and for.
    (E.g. if you are a .NET developer, always know its vulnerabilities)
  • Know the security aspects of your environment.
    (E.g., if you develop a web application, you should be no stranger to XSRF)

This list could go on forever, but the important thing is never to forget about the technical knowledge the developers need to be exposed to.
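To make the XSRF point concrete, here is a minimal, framework-agnostic sketch of the anti-forgery token technique. Real web frameworks ship their own hardened implementations, which you should prefer; the plain dictionary here merely stands in for server-side session storage.

```python
import hmac
import secrets

def new_csrf_token(session):
    """Generate a per-session anti-forgery token and store it server-side."""
    token = secrets.token_hex(32)
    session["csrf_token"] = token
    return token                      # embed this in a hidden form field

def is_valid_csrf(session, submitted):
    """Compare the submitted token to the session's, in constant time."""
    expected = session.get("csrf_token", "")
    return bool(expected) and hmac.compare_digest(expected, submitted)

session = {}                          # stand-in for real session storage
token = new_csrf_token(session)
print(is_valid_csrf(session, token))      # legitimate form submission
print(is_valid_csrf(session, "f" * 64))   # forged or guessed token
```

The constant-time comparison (`hmac.compare_digest`) matters: a naive `==` can leak, through timing, how many leading characters of a guess were correct.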

Dimension Three: Don’t freak out.

At some point you will conclude that you cannot have a secure solution within your budget. This can happen multiple times during a project’s development. It is usually a sign that you got the threat model wrong: you probably assumed an omnipresent and omnipotent attacker. (We all know you can’t protect against the “Chupacabra”, so don’t plan for it paying you a home visit.)

This kind of attacker doesn’t exist… yet. So don’t worry too much about it; focus on the critical aspects that need to be secured, and you’ll restore the balance with the budget in no time.

Instead of a summary of the 4 security dimensions of software development, I wish you happy secure coding and leave you a short but important reading list:

About the author

Ioan Popovici
Ioan Popovici, the Chief Software Engineer of Avaelgo, a Microsoft Certified Professional, and a trainer, specializes in Microsoft technologies and the patterns and practices around them, acting as the architect on most of Avaelgo’s solutions.
Migrating To The Cloud: How To Start

Mihai Tataran
General Manager & Partner | Azure MVP

The Cloud is not just a buzzword. It is one of the most innovative technologies we are living through, and it is part of a profound transformation trend, together with Virtual Reality, Machine Learning, and Artificial Intelligence, just to name a few.

In this article, I will describe the common fallacies we have encountered during talks with potential customers, and how we mitigate them.

Fallacy 1: The Cloud is just another word for co-location or hosting

It might seem so if you just scratch the surface, but it is wrong. Here are just a few reasons why I consider the Cloud a huge paradigm shift:

  • Utility costs less even when it costs more, or: how to pay for what you use. One might compare the cost per unit of time of a Virtual Machine from a hosting provider with the cost of a VM from a Cloud provider, and the VM in the Cloud might seem to have the same price. Yes, but in the Cloud you have great mechanisms which allow you to pay only when you use it, and not pay when you don’t. First, there is the commercial model, which bills per minute of usage – something no hosting provider does (at least none that I know of). Second, you have the tools (automation, etc.) which make such Start/Stop actions very easy.
  • On-demand is better than prediction, or: how not to lose business. Forecasting the IT infrastructure needed for a solution is guesswork. You either end up paying for more IT infrastructure than you need, or your infrastructure is not sufficient at the load peaks and you lose business. Just think about Black Friday, and consider that some businesses also have “mini Black Fridays” every week. What if your infrastructure could scale automatically based on rules and configurations you have defined? For example: “if my number of web requests per second exceeds 1000, scale out with 1 more machine”, etc. This is what the Cloud is about: elasticity.
  • Real-time computation, or: how to access a tremendous amount of computing power instantly. Very often we see complex solutions which need huge computing resources for a limited amount of time – e.g., salary and benefits software, credit risk analysis, etc. The traditional approach is to invest in the IT infrastructure required to run such software even if it sits unused 90% of the time. With the Cloud, you can provision the required infrastructure within minutes, use it for as long as you need, and then stop it. The Cloud offers this flexibility and speed in acquiring huge resources fast and then releasing them.
  • Become a data-driven company. Many enterprises sit on enormous amounts of data which is not stored, categorized, and analyzed properly – mostly because having Big Data analysis tools on premises is extremely expensive and hard to set up. You know exactly what I am talking about if you have ever considered installing a Hadoop cluster, or even managing a SQL Server Parallel Data Warehouse system. It requires diverse skills (IT administrators, DevOps, database admins, etc.) and it costs a lot. In the Cloud, you have such amazing technologies delivered as a service: first, you do not have the hassle of setting up the infrastructure, and second, you pay per use. You have hundreds of terabytes of data and need to analyze it? You might want to try Azure HDInsight or Azure Analysis Services – just to give some examples from Microsoft.

There could be other reasons, but I think these are enough to show why the Cloud is so different.

Fallacy 2: It is hard to migrate to the Cloud

Indeed, moving to the Cloud is not just a walk in the park, especially if you consider moving your entire infrastructure or core solutions.

That is why we always recommend a step-by-step approach. While we try to give our customers a longer-term vision, we begin with a simple pilot project which brings immediate results. So, we do talk about cost savings over a 3-5 year period, but we start with a project which is cost-effective within a few months, is sustainable from the budget perspective, and does not present enormous risks. There are many possible starting scenarios, but we have seen these most often:

  • Dev/Test: create Dev, Test, Staging, etc. environments, where the software development process becomes much more efficient and you see an immediate cost benefit.
  • Backup and Disaster Recovery: have backups of the most sensitive data in the Cloud, or even create a secondary site (active or not) in the Cloud, which could be turned on in case of a disaster in your primary infrastructure. I encourage you to read my article on Disaster Recovery.
  • Lift and shift: without benefiting from all the possible services in the Cloud, we take a workload from on premises and move it to the Cloud as close to 1:1 as possible. This is a low-risk, fast, but suboptimal move to the Cloud.
  • Analytics on existing data: you already have data being collected from different sources, but for some reason (cost, complexity, etc.) you are not performing enough analytics on it.

After the successful project, you get a few benefits: there is an early win, your team gets some Cloud specific know-how, and you can further build on it.

Fallacy 3: The Cloud is not secure

Actually, people saying this might be thinking about two different aspects: Data Privacy and Security.

Most of the relevant Cloud providers are doing a good job aligning with the data protection legislation in the EU. Microsoft is the case I know best, and they have become a certification machine – there are a lot of technical details here. On top of this, Microsoft is the only Cloud provider who offers regions (groups of datacenters) located in the EU and operated by local companies (more exactly, in the UK, Germany, and France). (I only consider AWS and Google, alongside Microsoft, as real competitors in the Cloud today – I know I might upset some people, but IBM, Oracle, and others are niche players or very small compared to the other 3.)

As for security, we must consider that a Cloud provider faces millions of attacks per day. They are facing them, and they are learning from them as well. Think about it this way: any new type of attack is analyzed (using Machine Learning), and all customers of that Cloud provider benefit from the findings – as opposed to staying on your own island, where you get no specific protection against sophisticated attacks. This is why the right way to see things should be: “I need to go to the Cloud because of security.” More information about how Microsoft acts on security here.


The Cloud is here, and you should think about using it because of the huge benefits it can bring. Yes, migrating to the Cloud is not an easy path, but it has been done by many, there is a lot of expertise on how to do it, and you can take it step by step.

About the author

Mihai Tataran
Mihai Tătăran is the General Manager of Avaelgo, a Microsoft MVP on Microsoft Azure, a Microsoft Azure Insider, and a Microsoft Certified Professional. Mihai has been teaching Microsoft technology courses to software companies in Romania and abroad, and has been invited by Microsoft Romania to deliver many such trainings for their customers.
Creating a Culture of Business Continuity

Diana Tataran
Marketing Professional

How often do disasters happen?
And how can they possibly affect my business?

Let’s start by defining “disaster” in terms of business.

Well, a disaster is just about anything that disrupts your normal business operations.

From a cyber attack to adverse weather.
From fire to an unplanned IT outage.
From human error to transport network disruption.

Luckily, Business Continuity has moved beyond the basic recovery of technology and facilities, and now focuses on protecting reputation and business value whenever an organization is threatened by unexpected events.

Start planning for just about anything that may put your business operations at risk!

Back up fast, recover even faster, and plan your business continuity!

Learn more about how you can protect your business and ensure its continuity here.

About the author

Diana Tataran
Diana Tătăran is a marketing professional at Avaelgo. Previously, Diana worked as a software developer for 7 years and as a project manager for 2 years. Between 2008 and 2011, Diana was recognized as a Microsoft Community Influencer for her contributions to the IT community.
