Abstract
Software firms are increasingly using and contributing to DevOps software, and many companies now use pipelines to improve efficiency. To do so, they need to understand and work with pipeline resources and processes. This paper investigates continuous integration (CI) and continuous delivery (CD) in AWS CodePipeline and the impact they have on building, testing, and deploying software. In the setup described here, AWS CodePipeline connects to a GitHub repository. AWS CodePipeline enables automated rapid delivery and a configurable workflow that integrates with third-party tools such as GitHub, Jenkins, and Docker. It uses IAM to manage who can make changes to a pipeline, and it supports parallel execution, allowing build, test, and deployment actions to run side by side to increase workflow speed.
General Terms
Pipeline, CI and CD
Keywords
AWS CodePipeline, Continuous Integration, Continuous Delivery
1. Introduction
This paper is an introduction to using Amazon Web Services (AWS) in DevOps. AWS resources can decrease time to market and reduce costs for companies. The paper discusses a specific tool, AWS CodePipeline, which was released in July 2015. AWS CodePipeline is a continuous integration, continuous delivery, and continuous deployment service. It automatically builds, tests, and deploys applications and services into the cloud, which reduces the risk of manual errors.
In today's world, applications must evolve quickly for customers. Improving and releasing software at a fast pace needs to be at the core of every business, making time to market and agility essential to maintaining a competitive advantage. Companies that can rapidly deliver updates to applications and services can innovate and adapt to changing markets faster, which delivers better results to the business and its customers.
With AWS CodePipeline, companies can deliver value to their customers quickly and safely.
2. Continuous Integration (CI)
"Continuous Integration doesn't get rid of bugs, but it does make them dramatically easier to find and remove." [1] Continuous integration is a widely established coding philosophy and set of practices that drive development teams to make small changes and check code into repositories frequently. Most applications require developing code across different platforms, systems, and tools, so development teams need a mechanism to integrate and validate their changes for continuous integration to work [2].
The goal of continuous integration is to establish a consistent and automated way to code, build, and test applications. With consistency and efficiency in the integration process, teams are more likely to commit code changes more frequently, which leads to better collaboration and software quality [2].
Continuous integration is based on several key principles and practices, a few of which are illustrated in code after the list:
- Maintain a single repository
- Automate the build
- Make the build self-testing
- Every commit should build on an integration machine
- Keep the build fast
- Test in a clone of the production environment
- Make it easy for anyone to get the latest executable version
- Ensure everyone can see what's happening in the repository
- Automate deployment
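As a minimal illustration of "Automate the build" and "Make the build self-testing", the sketch below runs each build step in turn and fails loudly if any step breaks. It is a generic script, not tied to any CI product, and the commands in it are assumptions to be replaced with a project's own tooling:

```python
import subprocess
import sys

def run(step_name: str, command: str) -> None:
    """Run one build step; abort the whole build if it fails."""
    print(f"--- {step_name} ---")
    result = subprocess.run(command, shell=True)
    if result.returncode != 0:
        print(f"Build failed at step: {step_name}")
        sys.exit(result.returncode)

if __name__ == "__main__":
    # Assumed commands; substitute whatever the project actually uses.
    run("install dependencies", "pip install -r requirements.txt")
    run("run unit tests", "python -m pytest tests/")
```

Because the script exits non-zero on the first failure, any CI system that invokes it can treat a broken build or a failing test as a failed run.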
Some of the costs of using continuous integration are the time investment and the quality and quantity of tests. Building automated tests requires a lot of work, and developers on a small team may not have the time to invest in this sort of infrastructure. Tests can also be unstable due to timing issues or errors, and the tests themselves may be hard to write [3].
Although setting up a continuous integration system might be daunting, it ultimately reduces risk, cost, and time spent on rework. If a developer introduces a bug and detects it quickly, it is far easier to remove [3][5]. Since only a small part of the system has changed, there is little to look back through, and only a small amount of work is lost. This enables much more frequent automated testing and releases.
Continuous integration also improves collaboration and quality, aided by many different tools:
Slack: a cloud-based set of proprietary team collaboration tools and services, which can link to GitHub.
Asana: a web and mobile application designed to help teams organize, track, and manage their work, simplifying team-based work management. It also links to GitHub.
3. Continuous Delivery (CD)
Continuous delivery picks up where continuous integration ends: it automates the delivery of applications to selected infrastructure environments. Continuous delivery (CD) is a software engineering approach in which teams produce software in short cycles, ensuring that the software can be reliably released at any time. It aims at building, testing, and releasing software with greater speed and frequency, letting teams quickly iterate on feedback and get new features to users faster.
A typical CD pipeline includes many of these steps [2], which the sketch after the list ties together:
- Pulling code from version control and executing a build.
- Executing required infrastructure steps that are automated as code to stand up or tear down cloud infrastructure.
- Moving code to the target compute environment.
- Managing the environment variables and configuring them for the target environment.
- Pushing application components to their appropriate services, such as web servers, API services, and database services.
- Executing any steps required to restart services or call service endpoints that are needed for new code pushes.
- Executing continuous tests and rolling back the environment if tests fail.
- Providing log data and alerts on the state of the delivery.
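As a rough sketch of how these steps hang together, the Python below models a delivery run that builds, deploys, tests, and rolls back on failure. Every command in it is a placeholder standing in for real tooling, not an actual pipeline definition:

```python
import subprocess

def run(command: str) -> None:
    """Execute one delivery step, raising if it fails."""
    subprocess.run(command, shell=True, check=True)

def deliver() -> None:
    run("git pull origin master")       # pull code from version control
    run("make build")                   # execute the build
    run("make deploy ENV=staging")      # move code to the target environment
    run("make smoke-test ENV=staging")  # continuous tests against the deployment

if __name__ == "__main__":
    try:
        deliver()
    except subprocess.CalledProcessError as err:
        print(f"Delivery step failed ({err}); rolling back.")
        run("make rollback ENV=staging")  # revert the environment when tests fail
```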
Continuous delivery has four main benefits:
- Continuous delivery automates the software release process by letting teams automatically build, test, and prepare code changes for release to production, so that software delivery is more frequent and efficient.
- Continuous delivery helps teams be more productive by freeing developers from manual tasks and encouraging behaviors that reduce the number of errors and bugs deployed to customers.
- Continuous delivery allows teams to more easily perform additional types of tests on their code because the entire process has been automated.
- Continuous delivery helps teams deliver updates to customers faster and more frequently. When continuous delivery is implemented properly, teams always have a deployment-ready build artifact that has passed through a standardized test process.
4. AWS CodePipeline
AWS CodePipeline is an automated continuous integration and continuous delivery service that enables developers to model, visualize, and automate the steps required to release software. With AWS CodePipeline, teams model the full release process: building the code, deploying to pre-production environments, testing the application, and releasing it to production. AWS CodePipeline then builds, tests, and deploys the application according to the defined workflow every time there is a code change. Partner tools and custom tools can be integrated into any stage of the release process to form an end-to-end continuous delivery solution.
4.1 Why should teams use AWS CodePipeline
By automating build, test, and release processes, AWS CodePipeline increases the speed and quality of software updates by running all new changes through a consistent set of quality checks. It also gives teams a chance to build their continuous integration and continuous delivery skills.
4.2 Pipeline Concepts
Figure 1: AWS CodePipeline concepts
A pipeline is a workflow that describes how software changes go through a release process using continuous delivery. The workflow is defined as a sequence of stages and actions.
A revision is a change made to the source location in your pipeline. It can include source code, build output, configuration, or data. A pipeline can have multiple revisions flowing through it at the same time.
A stage is a group of one or more actions. A pipeline can have two or more stages, and all stages must have a unique name.
An action is a task performed on a revision. Pipeline actions occur in a specified order, in serial or in parallel. The first stage must only contain a source action.
The stages in a pipeline are connected by transitions, represented by arrows in the AWS CodePipeline console. Once all actions in a stage complete on a revision, the revision is automatically sent on to the next stage, as indicated by the transition arrow. Transitions between stages can be disabled or enabled.
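Transitions can also be toggled programmatically. As a hedged example using the boto3 SDK (the pipeline and stage names below are placeholders), disabling the inbound transition of a stage holds revisions before it:

```python
import boto3

codepipeline = boto3.client("codepipeline")

# Hold revisions before the (hypothetical) Deploy stage.
codepipeline.disable_stage_transition(
    pipelineName="my-demo-pipeline",
    stageName="Deploy",
    transitionType="Inbound",
    reason="Hold deployments during a release freeze",
)

# Re-enable the transition so queued revisions flow through again.
codepipeline.enable_stage_transition(
    pipelineName="my-demo-pipeline",
    stageName="Deploy",
    transitionType="Inbound",
)
```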
The pipeline structure has the following requirements (a minimal definition that satisfies them is sketched in code after the list):
- A pipeline must contain at least two stages. The first stage of a pipeline must contain at least one source action and can only contain source actions.
- Only the first stage of a pipeline may contain source actions.
- At least one stage in each pipeline must contain an action that is not a source action.
- All stage names within a pipeline must be unique.
- Stage names cannot be edited within the AWS Code Pipeline console. If you edit a stage name by using the AWS CLI, and the stage contains an action with one or more secret parameters (such as an OAuth token), the value of those secret parameters will not be preserved. You must manually type the value of the parameters (which are masked by four asterisks in the JSON returned by the AWS CLI) and include them in the JSON structure.
- The pipeline metadata fields are distinct from the pipeline structure and cannot be edited. When you update a pipeline, the date in the updated metadata field changes automatically.
- When you edit or update a pipeline, the pipeline name cannot be changed.
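These rules are easiest to see in a pipeline's JSON structure. The boto3 sketch below defines a minimal pipeline that satisfies them: a source-only first stage followed by a build stage. Every name, ARN, and bucket in it is an illustrative placeholder:

```python
import boto3

codepipeline = boto3.client("codepipeline")

pipeline_definition = {
    "name": "my-demo-pipeline",
    "roleArn": "arn:aws:iam::123456789012:role/AWS-CodePipeline-Service",
    "artifactStore": {"type": "S3", "location": "my-artifact-bucket"},
    "stages": [
        {   # The first stage may only contain source actions.
            "name": "Source",
            "actions": [{
                "name": "GitHubSource",
                "actionTypeId": {"category": "Source", "owner": "ThirdParty",
                                 "provider": "GitHub", "version": "1"},
                "outputArtifacts": [{"name": "SourceOutput"}],
                "configuration": {"Owner": "my-github-user", "Repo": "my-repo",
                                  "Branch": "master", "OAuthToken": "****"},
            }],
        },
        {   # At least one later stage must contain a non-source action.
            "name": "Build",
            "actions": [{
                "name": "CodeBuild",
                "actionTypeId": {"category": "Build", "owner": "AWS",
                                 "provider": "CodeBuild", "version": "1"},
                "inputArtifacts": [{"name": "SourceOutput"}],
                "outputArtifacts": [{"name": "BuildOutput"}],
                "configuration": {"ProjectName": "production-build"},
            }],
        },
    ],
}

codepipeline.create_pipeline(pipeline=pipeline_definition)
```

Note the masked OAuth token: as described above, secret parameters are returned as four asterisks and must be retyped when editing the structure through the CLI.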
5. Setting up AWS CodePipeline
In 2017, AWS announced support for Amazon Elastic Container Service (ECS) in AWS CodePipeline [7]. This support makes it easier to create a continuous delivery pipeline for container-based applications and microservices. Amazon ECS and AWS CodeBuild will be used in the creation of a pipeline here; AWS CodePipeline and CodeBuild can then be integrated with ECS to automate the workflow in just a few steps.
To create a pipeline in AWS, you first need to register. AWS CodePipeline is free for the first month. After creating an AWS account, you can go straight to AWS CodePipeline and create a pipeline right away. The creation wizard walks through six steps to make sure AWS has everything it needs to create the pipeline for you. These six steps are:
Step 1: Choose a name for the pipeline.
Step 2: Choose a source location. In this instance, GitHub was chosen. When GitHub is selected, AWS CodePipeline must then be connected to GitHub, and you must declare which branch to use. In this case, master was chosen.
Step 3: In step 3, a build provider must be chosen; in this instance, AWS CodeBuild was selected. AWS CodeBuild is a fully managed continuous integration build service that compiles source code, runs tests, and produces software packages that are ready to deploy. With CodeBuild, you don't need to provision, manage, and scale your own build servers. CodeBuild scales continuously and processes multiple builds concurrently, so builds are not left waiting in a queue. Jenkins could also have been selected, but I wanted to learn more about AWS tools.
After the build provider was selected, the project had to be configured by creating a new build, naming the project in the pipeline, and giving it a description. After the build configuration came the project's environment; the environment image was managed by AWS CodeBuild.
Figure 2: Setting the environment image
Ubuntu was selected as the operating system and Docker as the runtime environment. Below is a quick summary of the environment settings, followed by a code sketch of the same configuration:
- Build provider: AWS CodeBuild
- Project configuration: production build
- Environment image: image managed by AWS CodeBuild
- Operating System: Ubuntu
- Runtime: Docker
- Version: aws/codebuild/docker:1.12.1
- Build specification: buildspec.yml in source code
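As a rough sketch, the same build project could also be defined through boto3 rather than the console. The service role ARN and repository URL below are placeholders; the image string matches the version listed above:

```python
import boto3

codebuild = boto3.client("codebuild")

codebuild.create_project(
    name="production-build",
    source={
        "type": "GITHUB",
        "location": "https://github.com/my-github-user/my-repo.git",
        # Build commands are read from buildspec.yml in the source by default.
    },
    artifacts={"type": "NO_ARTIFACTS"},
    environment={
        "type": "LINUX_CONTAINER",
        "image": "aws/codebuild/docker:1.12.1",
        "computeType": "BUILD_GENERAL1_SMALL",
        "privilegedMode": True,  # needed to run Docker inside the build
    },
    serviceRole="arn:aws:iam::123456789012:role/codebuild-service-role",
)
```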
After the environment settings, there were two other settings, Cache and VPC. These were left at their defaults, as this was my first pipeline. All changes were saved before moving to step 4.
Step 4: In step 4, the deploy settings must be configured; this is where the built code will be placed. Amazon ECS was chosen as the deployment provider. Amazon ECS stands for Amazon Elastic Container Service: a highly scalable, high-performance container management service that supports Docker containers and makes it easy to run and scale containerized applications on AWS [4].
After selecting the provider, a set of expanded options appears, and a cluster name, service name, and image filename must be supplied. A cluster is a grouping of container instances, and multiple clusters can be created [6]. Before a cluster name could be given, a cluster had to be created in Amazon ECS. I chose the Linux and Networking template.
Figure 3: Selecting a cluster template in step 4
After choosing the Linux and Networking template, I created my own cluster, named “ecs-demo”. A cluster is a group of Amazon EC2 virtual machines.
After choosing a cluster name, I then created a service (supplying the service name) and an image. A service schedules when containers need to run, and it is created in Amazon ECS. Before creating the service, a task definition and a container had to be created; the container definition includes a name, an image, a memory limit, and ports. These pieces are sketched in code after the summary below:
- Deployment provider: Amazon ECS
- Cluster name: ecs-demo
- Service name: nginx
- Image filename: nginxlatest.json
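A hedged boto3 sketch of how these ECS pieces fit together is shown below. The cluster, service, and image names mirror the summary above; the memory limit and port mapping are assumptions for illustration:

```python
import boto3

ecs = boto3.client("ecs")

# Create the cluster that will hold the container instances.
ecs.create_cluster(clusterName="ecs-demo")

# Register a task definition whose container runs the nginx image.
ecs.register_task_definition(
    family="nginx",
    containerDefinitions=[{
        "name": "nginx",
        "image": "nginx:latest",
        "memory": 128,  # hard memory limit in MiB (assumed value)
        "portMappings": [{"containerPort": 80, "hostPort": 80}],
    }],
)

# Create the service that keeps one copy of the task running.
ecs.create_service(
    cluster="ecs-demo",
    serviceName="nginx",
    taskDefinition="nginx",  # latest revision of the task family
    desiredCount=1,
)
```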
Step 5: In step 5, an AWS service role is created using IAM. IAM stands for Identity and Access Management; it helps control access to AWS resources and allows the pipeline to access resources in the Amazon account. Click on Create Role. The role created here was named AWS-CodePipeline-Service.
Step 6: Step 6 was a review of all the settings chosen for the pipeline. After accepting the review, the pipeline is created; a new page then shows a success alert at the top, indicating whether creation succeeded.
Figure 4: Pipeline created successfully
Unfortunately, this did not happen as I could not get past step 4.
6. Conclusion
A person or software firm seeking to work with AWS CodePipeline needs to understand continuous integration and continuous delivery, as well as the pipeline concept itself, before delving into AWS CodePipeline.
I chose this technology and this pipeline because I wanted a better understanding of both. AWS CodePipeline was a lot harder than I expected; there was much more to it than I realized. I had to learn new software like Amazon ECS and AWS CodeBuild while trying to learn AWS CodePipeline itself. I mainly got stuck on step 4 when creating the pipeline, because I had to create a cluster, which I did not know how to do, and I found the Amazon documentation messy, with too much information. I finally decided to create an empty cluster, which seemed to work.
After creating a cluster name, I moved on to the service name. This is where I got really stuck: there was too much documentation, and the information on creating a service in Amazon ECS was scattered across many places. I had to do a lot in Amazon ECS: create a cluster, then define a task definition, and within the task definition create a container for the service. I followed the documentation and completed the four steps of creating a service, but at the end I always got an error: something was wrong with the container image “nginx”. I did not understand the error and could not fix it.
Figure 5: Service creation error
Had I completed step 4, I would have been able to complete my pipeline. Once created, the pipeline would start its first build from the GitHub repository, using the master branch specified in step 2. This first build would fail because the new IAM role created in step 5 does not yet have permission to access the container image repository. To fix this, go to the IAM console dashboard and select Roles in the menu on the left-hand side of the screen. Under Roles, open the role you created, which leads to the role's summary page. From there, choose Attach Policy and check “AmazonEC2ContainerRegistryPowerUser” to grant the role the required access. Once the policy is attached, you are returned to the IAM dashboard. If the power-user policy now appears under the role's policy names, the role has the privileges it needs to access the repository, and work on the AWS CodePipeline could continue.
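The same policy attachment can be scripted with boto3. Below is a small sketch using the role name from step 5 and the managed policy named above:

```python
import boto3

iam = boto3.client("iam")

# Grant the pipeline's service role access to the container registry.
iam.attach_role_policy(
    RoleName="AWS-CodePipeline-Service",
    PolicyArn="arn:aws:iam::aws:policy/AmazonEC2ContainerRegistryPowerUser",
)
```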
Looking back now, I wish I had done a few things differently. I should have done more research on my topic to see what was involved, which would have prepared me better. I should have looked at other technologies too; instead, I just chose AWS CodePipeline because I wanted to learn about AWS services and about creating and using a pipeline. AWS CodePipeline was much bigger than I had first thought. I had no idea it would involve AWS CodeBuild, Amazon ECS, and Docker containers. All of this software, except for Docker, had to be learned too, which drew me away from the main focus of this research paper: continuous integration and continuous delivery with AWS CodePipeline.
References
[1] Fowler, M. (2018). Continuous Integration | ThoughtWorks. [online] Thoughtworks.com. Available at: https://www.thoughtworks.com/continuous-integration [Accessed 7 Sep. 2018].
[2] Sacolick, I. (2018). What is CI/CD? Continuous integration and continuous delivery explained. [online] InfoWorld. Available at: https://www.infoworld.com/article/3271126/ci-cd/what-is-cicd-continuous-integration-and-continuous-delivery-explained.html [Accessed 8 Sep. 2018].
[3] Zhou, A. (2018). The Principles of Continuous Integration and How It Maintains Clean Code and Increases Efficiency. [online] Forbes.com. Available at: https://www.forbes.com/sites/forbesproductgroup/2018/01/09/the-principles-of-continuous-integration-how-it-maintains-clean-code-and-increases-efficiency/#782d8f3c1920 [Accessed 8 Sep. 2018].
[4] Amazon Web Services, Inc. (2018). Amazon ECS – run containerized applications in production. [online] Available at: https://aws.amazon.com/ecs/ [Accessed 1 Oct. 2018].
[5] Fowler, M. (2018). Continuous Integration. [online] martinfowler.com. Available at: https://www.martinfowler.com/articles/continuousIntegration.html [Accessed 8 Sep. 2018].
[6] Docs.aws.amazon.com. (2018). Amazon ECS Clusters – Amazon Elastic Container Service. [online] Available at: https://docs.aws.amazon.com/AmazonECS/latest/developerguide/ECS_clusters.html [Accessed 1 Oct. 2018].
[7] Amazon Web Services, Inc. Set up a continuous delivery pipeline for containers using AWS CodePipeline and Amazon ECS. [online] AWS Compute Blog. Available at: https://aws.amazon.com/blogs/compute/set-up-a-continuous-delivery-pipeline-for-containers-using-aws-codepipeline-and-amazon-ecs/