
Migrating to AWS: Lessons from a Tech Lead

March 27, 2024
5 min read
Graham Reed, Technical Lead at Leighton

We hear a lot about new technology and its adoption within IT. However, I don’t think we hear enough about the problems encountered when attempting to integrate it into existing legacy tech stacks within an organisation.

Leighton are involved quite heavily with some of our clients in their endeavours to move much of their ageing tech stack across onto AWS. The list of even the most basic requirements is extensive, and below I will summarise some of the things I have learned as a Technical Lead working to bring new technology into production amid ageing, often essential, legacy systems.

Get the business engaged and onboard as early as possible

A clear understanding of what we are trying to achieve, especially the benefits, should be agreed by the business owners, the stakeholders and all the technical people involved. It helps to know clearly how we are all going to benefit from moving across to a new technology.

It's also worth mentioning that some people may not be as helpful as you hope. They may be unwilling to move from the system already in place as change is almost always challenging. It’s about winning hearts and minds too.

Get people with legacy expertise (if they still exist) onboard as soon as possible

Track down the people in the organisation who know the legacy systems you will have to change or integrate with. If they do still occupy these roles, get their time and engagement agreed as early as possible.

This is probably one of the biggest issues in some organisations. People who were the experts in a particular legacy technology may have moved on and have taken their knowledge with them. As a Technical Lead you spend a lot of time trying to dig out documentation and guides on where legacy systems can be accessed and changed and how to do it.

Flexibility is key

As Tech Leads and Developers, we have got to be flexible and multi-skilled and ready to dig deep into old code bases.

On a recent project we had to make changes to XSLT, Java, Angular, AngularJS, jQuery and vanilla JavaScript. You have to be able to adapt and know just enough about many technologies, old and new, to begin the process of migration.

Document everything

Every discovery, piece of important info, diagram or understanding – document it. Documentation is my way to offload what is in my head onto ‘paper’.

The very process of doing so helps me to understand it more clearly. Also, when I attend meetings as the Tech Lead I can present the documentation to help remind myself and facilitate the meeting more effectively.

Be prepared for a few false starts

Let’s face it. It’s going to happen. That one system we knew nothing about suddenly fails because we put a code change live. It’s important that everyone knows that things can easily be missed, and not even be known about, in large monolithic IT systems.

It’s critical to have the business and tech people engaged with you in the journey.

Build in contingency to roll back for every delivery, however minor

Wherever possible, have a rollback strategy for every stage of the migration.

In my opinion, you just cannot know the full impact of changing a legacy system so that it plays nicely with a new one.

Fail fast has got to be a strategy in many cases.

We recently made a change which caused an application within the call centre part of the business to exhibit an error. We simply had no idea until we heard about it after the code changes went live. It turned out to be due to a very old IE browser version still in use.

Things like this will happen.

And now for some technical learnings around migration of back-ends to AWS...

Rules to microservices principle

Identify all the rules a legacy system uses or applies. This means delving into code, databases and documentation in the old system.

Glean all the rules, then document and define them as a set of candidate microservices within AWS, along with any database requirements.
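As a sketch of what that documentation step can produce, the snippet below models each discovered rule as a record and groups the rules into candidate services. All the rule names, source files and domains are hypothetical, invented for the example:

```python
from dataclasses import dataclass


@dataclass
class LegacyRule:
    """One business rule dug out of legacy code, a database or documentation."""
    name: str
    source: str              # where it was found, e.g. a file or package
    domain: str              # candidate microservice grouping
    needs_database: bool = False


def group_into_candidates(rules):
    """Group documented rules by domain into candidate microservices."""
    candidates = {}
    for rule in rules:
        candidates.setdefault(rule.domain, []).append(rule)
    return candidates


# Hypothetical rules harvested from a legacy system.
rules = [
    LegacyRule("validate_postcode", "customer.xslt", "customer"),
    LegacyRule("apply_late_fee", "BILLING_PKG.sql", "billing", needs_database=True),
    LegacyRule("dedupe_customer", "CustomerDao.java", "customer", needs_database=True),
]

candidates = group_into_candidates(rules)
for domain, domain_rules in sorted(candidates.items()):
    db = any(r.needs_database for r in domain_rules)
    print(f"{domain}-service: {len(domain_rules)} rule(s), database required: {db}")
```

Even a flat catalogue like this makes the next conversation (which rules cluster into which service, and which need a data store) much easier.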

Has it been done already in another project?

Before doing anything new on AWS, has the organisation already got similar functionality somewhere? Can you re-use it?

Classic candidates are authentication and security, and common calls to internal legacy APIs.

Take a safe route and develop as is initially

Unless you categorically know that some piece of functionality can be decommissioned, repeat the same functionality in AWS as in the old. There may be a downstream department, person or system somewhere that depends on it being unchanged.

This works against failing fast, unless you are prepared to use that strategy and see what emerges from the fog…

Don’t create a Lambda jungle

It’s easy to create too many Lambdas, and all you’re asking for is trouble later when managing deployments and scalability. Combine related Lambda functions where possible. Use event-driven architecture to keep Lambdas decoupled and prevent messy dependencies.
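As an illustration of combining related operations rather than creating one Lambda per tiny endpoint, the sketch below routes EventBridge-style events to handlers inside a single function. The event types and operations are invented for the example:

```python
# Hypothetical handlers for related order operations, combined into one
# Lambda instead of several single-purpose ones.
def create_order(payload):
    return {"status": "created", "order_id": payload["order_id"]}


def cancel_order(payload):
    return {"status": "cancelled", "order_id": payload["order_id"]}


ROUTES = {
    "order.create": create_order,
    "order.cancel": cancel_order,
}


def handler(event, context):
    """Route an EventBridge-style event to the matching operation.

    "detail-type" and "detail" follow the EventBridge event shape; one
    Lambda owns a cohesive group of operations instead of each operation
    becoming its own deployable unit.
    """
    route = ROUTES.get(event["detail-type"])
    if route is None:
        raise ValueError(f"Unhandled event type: {event['detail-type']}")
    return route(event["detail"])
```

For example, `handler({"detail-type": "order.create", "detail": {"order_id": "A1"}}, None)` dispatches to `create_order`. The trade-off is per-function scaling granularity, so group by cohesion, not just to reduce the count.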

Consider legacy red paths first

This has been my mantra over the last few years. I don’t care half as much about the green paths as I do the red failure paths. Timeouts, event errors, database errors – you name it. Identify all the possible red paths in the old system and get them handled in the new AWS infrastructure as early as possible. Also, use SQS and a dead letter queue!
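For SQS-triggered Lambdas, one concrete way to handle those red paths is partial batch responses: with `ReportBatchItemFailures` enabled on the event source mapping, returning the failed message IDs means only those messages are retried, and the queue's redrive policy eventually moves repeat offenders to the dead letter queue. A minimal sketch, with a stand-in `process` function in place of real business logic:

```python
def handler(event, context):
    """SQS-triggered handler that reports per-message failures.

    With ReportBatchItemFailures enabled on the event source mapping,
    returning the failed message IDs makes SQS retry only those messages;
    after maxReceiveCount attempts the redrive policy moves them to the DLQ.
    """
    failures = []
    for record in event["Records"]:
        try:
            process(record["body"])
        except Exception:
            # Red path: record the failure instead of failing the whole batch.
            failures.append({"itemIdentifier": record["messageId"]})
    return {"batchItemFailures": failures}


def process(body):
    """Stand-in for real business logic; rejects anything marked bad."""
    if "bad" in body:
        raise ValueError("cannot process this message")
```

Without this, one poison message fails the whole batch and every message in it gets retried, which is exactly the kind of red path that goes unnoticed until production.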

Streamline and consolidate the AWS infrastructure when it’s proven and working

This can easily be ignored – well, it’s working now, so let’s move on to the next project. Build in time to consolidate the solution once it’s bedded in. Easier said than done, but do you want to leave the improvements you know you should make for someone else? You may also be able to reduce unnecessary AWS costs.

Get the AWS work peer reviewed and checked

Publish your architecture and implementation for review by experts in the organisation, if it has them. If you are creating an API, make it available via Swagger/OpenAPI (or similar) for review and consumption.

This includes a complete check that all the functionality in the old legacy system is repeated and useable in the new.

Test it ’til dawn

Test it to distraction. Goes without saying. Testing = quality assurance.

Consider support early for the new AWS infrastructure

Build with support in mind from the outset. Consider at every stage how anyone will support your implementation when it is live. How do they fall back in the event of a failure? Do they need to maintain/refresh data? Does it need patches/releases?

Learn and adapt

I’ve never been involved in any project where there wasn’t something I could have done better. Learn all the time.

AWS is particularly appropriate here. There are many ways to do the same thing with AWS. Only through learning and assessing the results of your resource choices and interactions will you find the appropriate set of AWS tools that work together in the best way for your requirements. There are many proven patterns out there and there will be one or two that closely fit what you require.
