Cloud Strategy for Your Organization: Migrating Workloads to PaaS

Mihai Tataran General Manager & Partner | Microsoft Regional Director, Azure MVP

Before we begin

This is a continuation of the first article in the “Cloud Strategy for Your Organization” series and focuses on another of the steps we usually take with our customers who migrate to the Cloud.

Some of the considerations described in the “Lift and Shift to the Cloud” article apply to PaaS migrations as well, even though that article focuses on general Lift-and-Shift scenarios.

Here we will focus on the most important architectural decisions one has to make when either migrating an existing application to Azure PaaS or creating a new application for it.

Why PaaS?

A very short explanation of PaaS vs IaaS can be found in the article referenced above. But, to give more detail, here is why running applications in PaaS is better than in Infrastructure as a Service:

  • You don’t need to manage and support Virtual Machines. You simply use services provided by Azure.
  • Better Disaster Recovery mechanisms, since all these services in Azure already have DR incorporated by design.
  • Higher availability. The typical PaaS service uptime in Azure is 99.99%, reaching 99.999% in some cases.
  • Lower cost with Azure: PaaS services are usually cheaper than their equivalent in IaaS (which would be VMs running a piece of software).
  • Access to technology: Artificial Intelligence, Machine Learning, and Big Data services are readily available.
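To make the availability bullet concrete, here is a quick back-of-the-envelope calculation (plain Python, not an Azure API) of how much downtime each SLA level allows in a 30-day month:

```python
# Maximum downtime allowed per 30-day month for a given availability SLA.
MINUTES_PER_MONTH = 30 * 24 * 60  # 43,200 minutes

def max_downtime_minutes(sla_percent: float) -> float:
    """Minutes of downtime per month that still satisfy the SLA."""
    return MINUTES_PER_MONTH * (1 - sla_percent / 100)

# 99.99%  allows about 4.3 minutes of downtime per month
# 99.999% allows about 26 seconds of downtime per month
```

The jump from 99.99% to 99.999% sounds small, but it cuts the allowed downtime by a factor of ten.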

Watch a video (in Romanian) where Mihai talks about the cloud strategy and migrating workloads in PaaS, as a follow-up to this article.
Options and more options

The great thing about the Cloud in general, and Microsoft Azure in particular, is that it provides many options for everything you want to achieve. For example, there are at least five options to host an application or run code in Azure PaaS: Cloud Services, App Service, Service Fabric, Functions, Logic Apps, etc. The upside of having many options is that you get very granular features designed for very specific needs; the penalty is that you really need to understand them well, otherwise you might make bad architectural decisions with costly consequences down the line.

From the architecture perspective, there are at least two major design decisions you need to make:

  1. What kind of architecture does my application have (if it is an existing application that I just need to migrate to Azure), or what kind of architecture does my application need?
  2. What is the best Azure PaaS option for my application to run on?

Architecture style

Here are some typical architecture styles for Cloud applications:

The first thing you need to do is make sure you understand which major category your application falls into.

Decision time

And now you must decide which Azure PaaS service to primarily use for your application, depending of course on the architecture style it needs and on other business criteria. Here is a great chart describing a decision tree for this phase:

Other architectural decisions

There are many other aspects you need to decide upon and here are just a few examples.

Multi-tenant applications

Let’s say your application is multi-tenant, meaning you have more than one customer accessing your application. Each customer might access your solution via a specific URL (e.g.: https://customer1.application.com, https://customer2.application.com, etc.), or it might simply be the same URL for everyone.

The first question we need to ask is whether it makes sense to have a single deployment for all customers, considering the simplified scenario that all customers run the exact same version of the application (the same code base). The right-hand side of the picture describes a single deployment for all customers.

Here is why it seems logical: you only have to maintain one application and one deployment for all customers. It appears cheaper, easier, straightforward! Or is it?

Here is another way to look at it: what if you have different customers with different expectations regarding uptime and performance? What if, to make it simple, you have some Free / Basic customers (who don’t pay for your solution) and some Premium customers (who pay and expect a high quality of service – QoS)? Obviously, if you have one deployment for all customers, in order to offer the QoS needed by Premium customers you end up offering it to everyone. And maybe 80% of the resource demand comes from the Free customers.

So, a more pragmatic approach is to consider the non-functional aspects of your solution and the QoS needed by each category of customers; it may make more sense to separate them into different deployments by category: one deployment for Free / Basic customers, one deployment for Premium customers. You can then allocate more resources only to the Premium deployment, configure it to autoscale, and so on.
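As an illustration of that split, a deployment-per-tier lookup might look like the sketch below. The tenant names, tiers, and URLs are hypothetical, not part of any real system described in this article:

```python
# Illustrative routing table: each service tier gets its own deployment,
# so Premium capacity and autoscaling are not consumed by Free tenants.
DEPLOYMENTS = {
    "free": "https://free.application.com",
    "premium": "https://premium.application.com",
}

# Which tier each tenant belongs to (hypothetical tenants).
TENANT_TIERS = {
    "customer1": "premium",
    "customer2": "free",
}

def deployment_for(tenant: str) -> str:
    """Return the base URL of the deployment serving this tenant's tier."""
    tier = TENANT_TIERS.get(tenant, "free")  # unknown tenants default to Free
    return DEPLOYMENTS[tier]
```

The point of the sketch is only that the tier decision is made once, at routing time, so each deployment can be sized and scaled for its own category of customers.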

Transient faults

If you start using PaaS services – like SQL Database, Storage, Service Bus, etc. – you need to understand a basic concept: they are offered from a shared environment, and that can occasionally cause unexpected behavior. We call these situations “transient faults”: errors caused by the environment where our service resides, which have nothing to do with our code and which disappear on their own. A specific example: another Azure customer whose SQL Database runs on the same physical infrastructure as ours triggers a query that momentarily drives the CPU to 100%. For a very short time, our queries or commands against our SQL Database will fail with a SQL error. The Azure fabric resolves the problem very quickly, but there is a short window during which we can see errors caused not by our application but by the environment.

What you must do is design your application code for such events: the code should expect the specific error types and exceptions that clearly identify transient faults, and act accordingly. One way to tackle this situation is a pattern called Retry Policy; for .NET there is already a framework created for it, called the Transient Fault Handling Application Block.
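A minimal sketch of the Retry pattern follows, in Python for illustration only. The real Transient Fault Handling Application Block is a .NET library, and a real Azure client raises much more specific exceptions (particular SQL error codes, throttling errors, etc.) than the generic ones assumed here:

```python
import random
import time

# Error types we treat as transient. In a real Azure client these would be
# specific, documented exceptions; ConnectionError/TimeoutError are stand-ins.
TRANSIENT_ERRORS = (ConnectionError, TimeoutError)

def with_retry(operation, max_attempts=5, base_delay=0.5):
    """Run `operation`, retrying on transient errors with exponential backoff."""
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except TRANSIENT_ERRORS:
            if attempt == max_attempts:
                raise  # still failing: probably not a transient fault
            # exponential backoff plus jitter, to avoid synchronized retries
            time.sleep(base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.1))
```

Usage would be something like `with_retry(lambda: cursor.execute(query))` for a hypothetical database cursor: the brief SQL errors described above simply get retried, while persistent failures still surface to the caller.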

Conclusion

Migrating to or designing new applications for Azure PaaS has tremendous advantages, but it also means we need to think differently: we must understand the Azure services better – what they do and what their limitations are – and in the case of application migration we may need to rearchitect or change small parts of the code.

If you are interested in exploring this topic further, Mihai talks about the cloud strategy and migrating workloads to PaaS in a video available here.

Mihai TATARAN, Microsoft Azure MVP, is the General Manager of Avaelgo, and Microsoft Regional Director, Microsoft MVP on Microsoft Azure, Microsoft Azure Insider, and Microsoft Certified Professional. Mihai has been teaching Microsoft technologies courses to software companies in Romania and abroad, being invited by Microsoft Romania to deliver many such trainings for their customers. Mihai has very good experience with large audiences at international conferences: DevReach in Bulgaria, Codecamp Macedonia; TechEd North America 2011, 2012 and 2013 – speaker and Technical Learning Center (Ask the Experts), Windows AzureConf. He is also the co-organizer for the ITCamp conference in Romania.

Cloud Strategy for Your Organization: Things You Need to Consider First

Mihai Tataran General Manager & Partner | Microsoft Regional Director, Azure MVP

Before we begin

Last year I wrote a series of articles focused on migrating to the Cloud, with examples on Microsoft Azure: how to start, and Lift and Shift 101. In this article we are going to discuss how to start your strategy to migrate to the Cloud, based on the experience we have gained in the meantime with enterprise customers, working on Microsoft Azure but also on Office 365 and Microsoft 365 migration projects. You may consider the road to the Cloud as a pipeline of steps, a minimal set of which is presented in this diagram:

In this article, we are going to focus on the very first step, just before actually moving to the Cloud.

Migrating to the Cloud: Options and scenarios

We usually have two types of customers or two types of migration projects:

  • Custom / Bespoke: complex organizations, complex projects;
  • Standard: most of the small and medium organizations can be approached in a standardized way.

Standard

While nothing is really standard in the IT Services world, we have common methodologies created for similar projects. One example would be migrating to Office 365. There are differences from customer to customer: they might currently use Exchange Server on-premises (maybe 2010, maybe 2007), they might use a Zimbra email server, they might have the server on-premises or hosted at a co-location provider, etc. But there are common steps and a common methodology to migrate that customer to Office 365: email server, documents and much more. The same can be applied to projects involving migration to Microsoft Azure, and in the end our customers benefit from the “Peace of Mind” standard services suite that we are offering.

Custom

The rest of this article is focused on complex projects or organizations, where we typically don’t talk about migrating only a single solution, but a suite of solutions with interdependencies – and sometimes the whole IT estate of the organization.

Watch a video (in Romanian) where Mihai talks about the cloud strategy and how to start approaching the migration into the Cloud, as a follow-up to this article.

Drivers for Cloud migration

There can be many drivers toward such a move and here is a short list.

Efficiency

There are many scenarios where the customer sees huge cost savings. If you consider one of the key attributes of the Cloud – that you pay for what you use – the monthly cost of some complex IT workloads can be much smaller than on-premises. Among such scenarios I would enumerate:

  • DevTest: machines for testing, staging, etc. – which don’t need to run 24/7, but only a few hours per day.
  • On/Off operations, e.g.: salary calculation, 3D rendering, etc. – operations which require computational power a few days per month or a few hours per day.
  • Disaster Recovery – another strong reason for the Cloud; here is an article on this very subject.

Access to technology

Technologies like Big Data, Machine Learning, and Artificial Intelligence are very expensive or simply cannot be installed and managed on-premises because of the complexity they imply. The Cloud is also great because it gives everyone access to such amazing technologies, in a pay-per-use cost model.

Startup

If you are a greenfield investment or a startup, your entire IT infrastructure can be operational in a matter of days. Your email, document sharing, collaboration tools, your invoicing application, your CRM, your ERP, etc. – all of them can be provisioned easily and fast in the Cloud, without the need to acquire any IT equipment except for employees’ laptops, tablets, and smartphones.

Initial things to consider

It is an IT project, but before starting any actual IT work we should consider a few aspects.

Complexity

Migrating an organization or a set of solutions to the Cloud is not a simple, risk-free project. It takes time, usually months or years, and it impacts many more departments than IT.

Current IT state

From the migration perspective, there is the need to analyze the initial state of the IT infrastructure. Questions like these need to be asked in the beginning:

  • Is there a consolidated infrastructure?
  • Is there a common identity mechanism for all users? Are there multiple identities, Single-Sign-On, Federation mechanisms in place?
  • Are current workloads virtualized, or are they running directly on physical machines? Which virtualization technology is being used?
  • Is the customer already using the Cloud? From which providers? If using Azure, which kind of contract (pay as you go / Enterprise Agreement / CSP)?

Vision

The current state analysis needs to be augmented with envisioning what IT could do for the business if it had the tools. Another key attribute of the Cloud is that it delivers technology which does not exist or is very expensive to have on premises. Aspects like: Big Data, Machine Learning, Artificial Intelligence are such examples, and in this phase, we should discuss with the customer what could be done for the business. Or even simpler than that: you might need a machine with huge computational power or a new piece of software that the company just bought. In the Cloud, provisioning such machines with tens of cores and hundreds of GB of RAM (or even TB of RAM) takes minutes.

Financial

What is the preferred payment strategy? Does the client need a pay-per-use type of contract or a capital multi-year investment? Both are possible, with advantages on each side, and the decision to choose one over the other depends very much on the specifics of every customer.

HR

Some roles within the IT department will need to change. There will be new technologies, new mechanisms to be operated and supported, so a skill upgrade needs to be done. Before that, there is also a paradigm shift: we should not see the Cloud as just another location for some servers. If we only see it like that, we fail to optimize the Cloud usage. In that respect, the IT personnel from the customer needs to go through a mindset transformation before acquiring the specific technical skills needed for the Cloud.

Roles

Roles within the project team must be clearly identified: the customer must understand what their role is and what is expected from their team before, during, and after the migration project.

Buy-in

Especially from top management, but also from all department/business unit leaders who use the IT systems that will move to the Cloud. A strategy is needed for how users will be impacted by this change and what we need to do to help them. The easiest way we have found to get the client organization’s buy-in is to start with a pilot, or a simple and quick project that delivers immediate benefits within the first months of the whole program.

Conclusion

This article described just the first step of a Cloud migration program for an organization. There are multiple steps, which we will cover in the upcoming weeks; many of them are essential, while others are optional. In the next article, you’ll find out what you need to know about migrating workloads to PaaS.

If you are interested in exploring this topic further, Mihai talks about the cloud strategy and the things you need to consider before actually starting the migration into the Cloud in a video available here.

Becoming GDPR-compliant – Avoidable privacy happenings

Ioan Popovici
Chief Software Engineer

Last time, I briefed some of the steps you need to cover before starting to choose the tools that will help you achieve compliance. Let’s dig a little deeper, using some real-life negative examples that I ran into during this phase.

Case 1. The insufficiently authenticated channel.

Disclosure disclaimer: the following examples are real. I have chosen to anonymize the data about the bank in this article, although I have no obligation whatsoever to do so. I could disclose the full information upon request.

At one point, I received an e-mail from a bank in my inbox. I was not, am not, and hopefully will not be a client of that particular bank. Ever. The e-mail seemed (from the subject line) to inform me about some new prices for the services the bank provided. It was not marked as spam, and so it intrigued me. I ran some checks (traces, headers, signatures, specific backtracking magic), came to the conclusion that it was not spam, and so I opened it. Surprise: it was directly addressed to me – my full name appeared somewhere inside. Oh, and of course it thanked ME for choosing to be their client. Well. Here’s a snippet (it is in Romanian, but you’ll get it):

Of course, I complained to the bank, asking them to inform me how they got my personal data, to delete it, and so on. Boring.
About four-plus months later (not even close to a compliant response time), a response popped up:

Let me brief it for you: it said that I am a client of the bank, that I have a current account, and where the account was opened. Oh, but that is not all. They also gave me a copy of the original contract I supposedly signed, and a copy of the personal data processing document that I had also signed and provided to them. With the full-blown personal data – I mean full-blown: name, national ID numbers, address, etc. One problem though: that data was not mine; it belonged to some other guy who had one additional middle name. And thus, a miracle data leak was born. It is small, but it can grow if you nurture it right.

What went wrong?
Well, in short, the guy filled in my e-mail address and nobody checked it – not him, not the bank, nobody. You can imagine the rest.

Here’s what I am wondering:

1. Now, in the 21st century, is it so hard to authenticate a channel of communication with a person? Is it so difficult to implement a solution for e-mail confirmation based on some contract id? Is it, really? We could do it for you, bank. Really. We’ll make it integrated with whatever systems you have. Just please, do it yourselves or ask for some help.

2. Naturally, privacy was 100% absent from the process of answering my complaint, even though I made a privacy complaint. Is privacy totally missing from all your processes?
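The e-mail confirmation asked for in point 1 can be sketched as a simple double opt-in check: mail the recipient a token bound to both the address and the contract id, and treat the channel as authenticated only once that token comes back. The sketch below is illustrative – the names and scheme are my own assumptions, not the bank’s system:

```python
import hashlib
import hmac
import secrets

# Server-side secret; in production this would live in a secrets store.
SECRET_KEY = secrets.token_bytes(32)

def confirmation_token(email: str, contract_id: str) -> str:
    """HMAC tying an e-mail address to a contract id; sent as a confirm link."""
    msg = f"{email}|{contract_id}".encode()
    return hmac.new(SECRET_KEY, msg, hashlib.sha256).hexdigest()

def verify(email: str, contract_id: str, token: str) -> bool:
    """True only if the recipient clicked a link generated for this exact pair."""
    return hmac.compare_digest(confirmation_token(email, contract_id), token)
```

Until `verify()` returns true for an address, nothing containing personal data should ever be sent to it – which would have prevented this entire incident.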

In the end, this is an excellent example of bare legislative compliance with zero security involved – I mean ZERO security. They do have some basic legal compliance: there is a separate document asking for personal data and for permission to process it. The document was retained and it was accessible (OK, it was too accessible). They did answer my complaint, even if not in a compliant timeframe.

Conclusions

0. Have a good privacy program. A global one.

1. Have exquisite security.

2. When you choose tools, make sure they can support your privacy program.

3. Don’t be afraid to customize the process or the tools. We (and, to be honest, anybody in the business) could easily give you a quote for an authentication/authorization solution of your communication channels with any client.

I am sure you can already see for yourself how this is useful in the context of choosing tools that will help you organize your conference event, and still maintain its privacy compliance.

About the author

Ioan Popovici

Ioan Popovici, the Chief Software Engineer of Avaelgo, Microsoft Certified Professional, Certified Information Privacy Professional / Europe, is specialized on Microsoft technologies and patterns and practices with such technologies, acting as the architect on most of Avaelgo’s solutions. He has delivered many trainings to software companies in Romania.

Becoming GDPR-compliant – Tools, Information Security Topics and some Disaster Scenarios

Ioan Popovici
Chief Software Engineer

In the first article of this series, I briefed some of the main points that need review before you start making your event GDPR-compliant, and mentioned that in doing so you will obtain, as a happy byproduct, a nice fingerprint of your event.
Now, as a side note, and as you probably have already figured out, this series of articles is not necessarily addressing those environments that already have a data governance framework in place. If this is your case, I am sure you already have the procedure and tools in place. This series may become interesting for you when we get to talk about some specific tools, information security topics and some disaster scenarios.

There are still some grounds to cover regarding this topic, so let’s go!

Most probably, your main focus in the beginning is: let’s cover some of the costs using sponsors, and let’s fire up those registration and call-for-content procedures right away. Let’s not just rush into that. In order for you to collect data from participants and speakers (in short), you must have a legal basis for doing so. The legal basis for the processing – in this case just collecting the data – may not be much of a choice, even though it seems so. In our experience, given the specifics of our activity, the realistic choices are consent and fulfillment of a contract. You will probably want a homogeneous legal basis for all of your participants. Let’s assume consent as the legal basis for processing.

Consent

In order to obtain valid consent, you are obligated to provide the person giving it with several pieces of information:

  • Recipients of the personal data
  • Intention to transfer data to a third country or international organization
  • Storage Period, or criteria used to determine it.
  • Whether automated decision making is present in the processing

Just to name a few. I will not detail here the full challenges of what consent should be, because that may become boring to you. You may know all this already; after all, you are already in this business.
Several of these topics are easy to pinpoint if you went through the process detailed in the first article of the series (e.g. identifying the recipients of the personal data). Still, some of them did not derive from that first process.
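One practical way to keep track of these notification items is to store them alongside each consent you collect. Here is a sketch of such a record; the field names and structure are illustrative assumptions, not a legal template:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ConsentRecord:
    """What the data subject was notified about, plus proof of consent given."""
    subject_email: str
    purposes: list            # e.g. ["event registration", "badge printing"]
    recipients: list          # who receives the personal data
    third_country_transfer: bool   # intention to transfer outside the EU?
    storage_period: str       # retention period, or the criteria to determine it
    automated_decisions: str  # description of any automated decision making
    given_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))
```

Keeping one such record per subject makes it straightforward to later demonstrate what exactly the person consented to, and when.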

Establishing Data-Flow and assessing the tools

In order for you to be able to answer questions like:

  • Is this data going to travel outside the EU? Where exactly?
  • Are we going to profile anybody, or do any automated decision making?

you first need to define a data-flow associated with personal data and, even more, start thinking about the tools you are going to use.

Remember, in our first article we talked about the need to think about some third-party software that may help you with some of your activities. Where does this software keep its data? Is it outside the EU? Can you control this?
You see where I am going with this: formalizing the data-flow and knowing which tools touch your data is of utmost importance before even asking anybody for consent.

But don’t panic! These are things you needed to do for your event anyway; now you just need to do them earlier – and, if you ask me, at exactly the right moment to benefit the most from them. You do not want to start thinking about what tools you need when you already have 300 attendees registered by phone. That would be a bummer.

Next time, we are going to take a deeper look into tools and some basic security requirements that we recommend! Be safe!

How to build a GDPR-compliant conference or event

Ioan Popovici
Chief Software Engineer

I am starting a series of articles in which I will try to cover my experience in managing privacy and GDPR compliance for several IT-related conference events that we handle here at Avaelgo. During this journey, I will also touch on some in-depth security aspects, so stay tuned for that.

As I am sure you know already, a conference is a place where people gather, get informed, do networking (business or personal), have fun, and who knows what other stuff they may be doing. The critical aspect here is that for such a conference to be successful, you need to have a fair amount of people being part of it. Moreover, since people are persons, well, that also means a fair amount of personal data.

There’s a lot to cover, but we will start with the basics. If this is the first time you are organizing such a conference, then you already have a head start: you do not have to change anything. If not, then you must begin by reviewing the processes that you already have in place.

In this first article, I am going to cover what are the key points that you should review. Let’s go:

1. How do people get to know about your event?

It is essential to know exactly how you are going to market your event. The marketing step is crucial and must itself be compliant with the regulation. This is a slightly separate topic, but it cannot be overlooked.
It does not matter whether you market to participants, speakers, or companies – personal data is still going to be involved.

2. How are people going to register for your event?

That is, how are you going to collect data about the participants? Is there going to be a website that allows registration? Do you include phone registration? There are more questions to answer, but you get the idea of the baseline. These decisions will have a later impact on the security measures you need to take to secure those channels.

3. How are speakers going to onboard your event?

Same situation as above, but it may be that there is a different set of tools for a different workflow.

4. How are you going to verify the identity of the participants?

Is someone going to manually verify attendance and compare ID card names with a list? Is there going to be a tool? Is there a backup plan?

5. Do you handle housing, traveling for speakers or participants?

If yes, you will probably need to transfer some data to hotels, airlines, taxis, etc.

6. Do you have sponsors? Do they require some privilege regarding the data of the participants?

This aspect is a big one. As I am sure you know, some or all of the entities that collaborate on your conference will require some perks back from your event. It may be that they are interested in recruitment, marketing, or other kinds of activities involving the personal data of your participants. Tread carefully: everything must be transparent.

7. Will you get external help?

Companies, volunteers, software tools and services that will help you with different aspects of organizing the event? What are they going to do for you? If they touch personal data, it is probably good to know before you hand it over to them.

8. Are there going to be promotions and contests?

Usually, these are treated separately, and onboarding to this kind of activity will be handled independently, but it is still a good idea to know beforehand if you intend to do this.
As you can already imagine, this is not all, but we will cover each topic from here in future articles, and then probably extend to some more.

All of this may look scary, and it might seem to involve a lot of work, but that isn’t the case. In the end, by trying to tackle personal privacy beforehand, you also get, as a happy byproduct, a cool fingerprint of what you need to do to have a successful event. Cheers to that!

A future article will come soon, covering the next steps. I am sure you can already guess what those are. See you soon!

Before you go

If you want to find out more about GDPR and how it affects your events, your company, etc., you can register for our free webinars in Timisoara and Cluj-Napoca.

Digital Transformation. How to transform the company?

Cristian Barsan
Business Development Manager, Digital Transformation Evangelist @ Avaelgo

1. The culture, the attitude

When it comes to Digital Transformation, we all think about new cloud and mobile technologies, disruptive business models, and innovative processes. First of all, we have to admit that people’s attitude towards the transformation of their company is the key to starting such a process.

Business owners are the ones who have the role of creating a culture of transformation. And a transformation is possible only with open eyes and a proactive attitude about trying new things. “Fail fast, learn quickly and move forward” may be the mantra for many changes. Obviously, you will not risk everything. But what should you risk, and what should you start with?

2. Top-Down vision

It must start from the top – the executive level – and filter down and across to all areas of an organization. In the same vein, digital transformation represents a mindset change that must fully align the business around the customer.

3. Start the plan with an accurate baseline – bottom-up process

The challenge is, first, to identify the core business systems and processes that continue to provide value and that you will not change; and second, to identify the ones that block business agility because of complex customizations, outdated capabilities, and high maintenance costs.

Audit business processes, infrastructure and IT assets with an eye toward financial costs and value to the business. Then perform a gap analysis that looks at demands by the business for new services and the investments needed to meet those needs and support the business strategy.

Integrate the core line-of-business systems and new business initiatives with mobile, cloud, social and data analytics applications.

4. Focus on quick wins

The most important thing is to demonstrate the business value of new investments. Here is the challenge: rapidly identify meaningful results and prove the direction is correct. In this way the CIO and/or project owner will quickly win the support of top management, and budget for the next steps in the digital transformation process.

So, start with well-defined projects that promise quick wins and demonstrable results.

5. Hunt for new talent

The new company processes will require new skill sets, including an entrepreneurial outlook that embraces cloud and agile IT systems. Involve HR in your new digital transformation strategy to fuel the business with the right candidates now and for the future. Search for new talent and for new skills.

6. Choose the right partner

Start and execute your digital transformation with an IT/technology partner with proven results – a partner who can be your trusted advisor both short and long term. They should have a broad set of capabilities, from a strong understanding of business processes to IT infrastructure, technology support, and training services for your employees.

In the end do not forget, digital transformation is about people’s courage to change and try new things, about the company’s culture and right mindset.

About the author

Cristian Barsan

Cristian has more than 12 years of experience in selling business solutions in the IT and B2B environment in Romania. Previously a Business Development Manager at Microsoft Romania, Cristian is now Digital Transformation Evangelist at Avaelgo. His main responsibilities are generating new projects and developing and consolidating relationships with customers, current partners, and potential partners.
