Thursday, February 13, 2014

DevOps - Automation and Continuous Testing is the Key

While leading the organization to adopt continuous delivery and DevOps, I faced various challenges: test suites that were automated with a 'but' or an 'almost', and lengthy, sequential execution cycles with various causes, not counting the build size itself or network and geographical constraints. Test artifacts need to be copied across systems, components belong to different teams using varied test assets, and if tests have to run against frequent (say hourly) builds to identify failures immediately, it is quite a challenging task. Yet this holds the key to success for both continuous integration and continuous delivery. Developers should get immediate feedback on whether their code is OK, using a quick personal build and test run, so they can resolve problems right away rather than after moving on to something else (a new function or piece of logic); each line written impacts the system. On success, the pipeline can run the complete set of tests before integrating the changes, or integrate the changes first and then run the full set, depending on the time required. In either case a quick turnaround is needed, and it is not possible to achieve manually.

Now everyone does automation, so what is special here? The key is complete automation, with no 'but' or 'almost': the system performs every task on its own, and various tools, including IBM's, provide seamless automation capabilities here. It is up to us to deliver an integrated testing mechanism that not only performs tasks automatically but also reduces the time required for testing, even as the number of tests to be executed grows.

We tend to run everything, but tests should be executed in a deliberate order. Run a sanity test before the full suite for individual components, to ensure the build, infrastructure, configuration and other assets are working to a minimal level of acceptance. Prioritize frequently failing tests to run right after sanity, to confirm the current build is good for formal testing. This leads to an 'always working' test bucket with very little chance of failure, covering the majority of the suite. What remains is a subset of tests that either provide no major benefit or are really time consuming and not required on every build; reduce their execution frequency. DevOps also improves the process by letting us build and test components separately and then perform integration testing, enabling frequent testing of individual features or components.
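The ordering above can be sketched in a few lines; the test names, suite labels and failure rates below are hypothetical, not from any real project:

```python
# Hypothetical test records: (name, suite, historical failure rate).
# Sanity tests gate everything; frequently failing tests run next;
# the stable 'always working' bucket runs last (or on a reduced schedule).
tests = [
    ("test_report_export", "stable", 0.01),
    ("test_login_flow", "sanity", 0.00),
    ("test_currency_rounding", "regular", 0.35),
    ("test_build_artifacts", "sanity", 0.02),
    ("test_bulk_import", "regular", 0.10),
]

def priority(record):
    name, suite, fail_rate = record
    # Sanity first (group 0), then everything else, most-often-failing first.
    return (0 if suite == "sanity" else 1, -fail_rate)

ordered = [name for name, _, _ in sorted(tests, key=priority)]
print(ordered)
```

The failure rates would come from your own build history; the point is simply that a cheap sort gives the fastest possible signal on a bad build.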

Concurrency, or parallelism, is another step: with automated deployment of builds following the DevOps model, plus automated provisioning and configuration, we can run tests across systems in parallel and get results much faster. Automated provisioning also enables a template-based approach, taking away install time, and we can snapshot areas where no new pieces need to be installed, such as databases or source systems.
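A minimal sketch of fanning suites out in parallel; the `run_suite` function here is a stand-in for deploying a build to a provisioned system and running one component's tests there:

```python
from concurrent.futures import ThreadPoolExecutor

def run_suite(suite):
    # Stand-in for real work: deploy the build to a provisioned system
    # and execute that component's tests; returns (suite, passed).
    return (suite, True)

suites = ["engine", "services", "clients", "connectors"]

# Run each suite on its own worker instead of one long sequential cycle.
with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(run_suite, suites))

print(results)
```

In practice each worker would target a different machine or virtual system, so total wall-clock time approaches that of the slowest suite rather than the sum of all of them.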

Stubs hold the key: rather than connecting to external systems for every small change, simulate them. This saves a lot of time, and tools like IBM Green Hat are great for such simulation. Consolidate infrastructure using cloud-based deployment (SoftLayer) and use local virtual systems where the other test assets reside, to remove network contention and improve productivity.
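Outside dedicated tools like Green Hat, the same idea can be sketched with a plain in-process stub; the `ExchangeRateClient` interface and the rates below are made up for illustration:

```python
class ExchangeRateClient:
    """The real client would call an external system over the network."""
    def rate(self, currency):
        raise NotImplementedError("network call, avoided in tests")

class StubExchangeRateClient(ExchangeRateClient):
    """Canned responses: fast, deterministic, no external dependency."""
    def __init__(self, rates):
        self.rates = rates
    def rate(self, currency):
        return self.rates[currency]

def price_in_usd(amount, currency, client):
    return round(amount * client.rate(currency), 2)

# Tests run against the stub, so no external system needs to be up.
stub = StubExchangeRateClient({"EUR": 1.10, "INR": 0.012})
print(price_in_usd(100, "EUR", stub))
```

Every small change can then be tested immediately; the real external system is exercised only in the less frequent integration runs.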

I will cover more testing aspects and other DevOps-related content, including uDeploy, Chef and SoftLayer, in the coming series.

Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

Tuesday, February 11, 2014

DevOps - Key to Continuous Delivery

While exploring Jenkins and the continuous integration pipeline, I thought of sharing some notes on continuous delivery. Gone are the days of the waterfall model, where companies spent years building products and clients spent further time consuming them before moving to production. In an era of continuous change and the need for something new every day, companies are moving towards quick release cycles. It started with mobile applications, where new features are delivered regularly on top of an existing framework and can be seamlessly upgraded as well. With frequent release cycles, an established mechanism needs to be in place to avoid stress on resources and quality standards. This leads to a concept guided by open communication and collaboration between software engineering and IT operations experts.

DevOps is a model that enables an enterprise to continuously deliver software to the market and seize market opportunities by responding immediately to customer feedback or requirements, with all quality checks in place. Following agile principles within the software delivery life-cycle, DevOps enables organizations to achieve quicker time to value and provides scope for more innovation, leading to easier maintenance cycles. The goal of the DevOps model is to move towards automation with no manual steps; automation can be triggered by non-operations staff, or by the system itself. It gives developers more control over their environment so they can focus on research and development. Through automation, DevOps brings predictability, efficiency and maintainability to operational processes.

Different tools are available in the market to complete the DevOps model, including a full set of DevOps offerings from IBM that help achieve process optimization and continuous delivery. IBM also offers DevOps as a service, from development to deployment, where developers can collaborate with others to plan, develop, track and deploy software. It is completely cloud based, with community-based development via IBM BlueMix, where you can create applications efficiently and deploy them across domains. From planning to development, and from testing to release and deployment, everything is in one place, following a series of automation pieces to achieve machine-driven product delivery.

I will share more on the steps and on how to create apps on BlueMix following the DevOps model, along with the Jenkins series, in upcoming blogs.

Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

Thursday, January 23, 2014

Jenkins Series 2 - How to Create my First Job in Jenkins

Before sharing how to use Jenkins to build complex pipelines and create build trends, here is how to create your first job in it. Click on New Job. It will open a new window with different options.

Create a New Job

In the new window you will see a screen like the one below; add the name of the job you want to create.
Provide the Name of the Job
Once the name is given, the next step is to choose Add a Build Step. This is where you define what steps your Jenkins job should perform. You can pick Windows or a Unix shell depending on where you want to execute your commands or scripts. In my example I have taken Windows. There are various other options available which you can play around with once you are comfortable with Jenkins and really need them.

Choose the Shell You Want for the Build Step

Once the shell step is available, add your command, batch file, shell script or any other steps you want to perform. I have given only the ls -l command and saved it. This is what actually gets executed on the slave (we will discuss it in another blog), a kind of compute node where the process is intended to run. In this case, since you do not have anything registered other than the master, it will run on the same machine where Jenkins is deployed and will give you a listing of the workspace directory. The workspace is the folder where Jenkins performs its activities by default. You can use a custom workspace as well (we will discuss it in another blog).

Once you click Save, it leads to the following screen and provides the Build Now option.
Click on Build Now to execute the above step
On successful execution you will see the results.
Click on Console Output to see the details of the logs. Since we haven't selected a specific framework or reporting, you will see the console log by default for the steps executed above, as Jenkins opens a shell or command prompt and executes the steps.

Console Output

Hope you have now created your first job on Jenkins and are ready for an interesting journey. In coming blogs I will discuss how to register a slave from the command line, what a slave in fact is, how to consolidate results, and how to run different kinds of frameworks. So stay tuned.

Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

Wednesday, January 22, 2014

Jenkins Series 1 - Create Trends for Existing Results - Using JUnit Publisher

After taking a long break from non-Information Server blogging, I thought of sharing my recent exposure to the DevOps model.
Today I will share my experience of using Jenkins as a dashboard to show trends of nightly build tests. You do not need to use Jenkins to execute these tests; you can simply use the XML report files generated by your nightly build verification system and produce these trends live, instead of relying on spreadsheets and their tabs.

Here are the few steps needed to use Jenkins to produce trends on the dashboard, when your test cases are based on the JUnit framework or the results produced are XML reports adhering to the JUnit format.
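For reference, a JUnit-style report is just XML with testsuite and testcase elements, and the publisher extracts its counts roughly like this sketch (the report content below is made up):

```python
import xml.etree.ElementTree as ET

# A made-up JUnit-style report, as a nightly build system might emit it.
report = """
<testsuite name="nightly" tests="3" failures="1">
  <testcase classname="engine" name="test_load"/>
  <testcase classname="engine" name="test_transform">
    <failure message="expected 10 rows, got 9"/>
  </testcase>
  <testcase classname="ui" name="test_render"/>
</testsuite>
"""

root = ET.fromstring(report)
total = len(root.findall("testcase"))            # all executed test cases
failed = len(root.findall("testcase/failure"))   # those with a <failure> child
print(total, failed)
```

These per-build pass/fail counts are exactly what feeds the trend graph.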

Step 1: Copy the reports produced by your build verification tests or unit tests onto the slave or machine where you want to run the process. You can even use a shared folder for this.

Step 2: For Windows use Execute Windows batch command; for Unix use Execute shell.

Step 3: Within the Windows or Unix step (see the image below), copy the report files into the workspace. You can use a custom workspace if you do not want to copy these files. You can also write a script that executes based on your build number, supplied as a parameter. The step below can be used to copy any other files as well, not only those required for reporting.
Copy XML Report Files to Workspace
Step 4: Use Publish JUnit test result report to show these results on the Jenkins dashboard for the selected build. The Jenkins plugin will parse the files and produce results on the dashboard.
Publish JUnit Test Result Reports to Jenkins Dashboard
Step 5: Save the Jenkins job

Step 6: Execute the Build with Build Now
Jenkins will process these XML files and produce a trend graph. Since it is only for one build, you should get just that build's failure and success details. Repeat the process for different builds and you can see trends like the one below.
Chart showing Trends for different build on Dashboard

In the trend above, the red graph going upwards means failures are increasing across builds, reducing confidence in the build. You can write the job once and have it triggered at different intervals or by an external trigger. You can navigate the graph and see failures component-wise. In the next blogs I will cover how to identify failures and see whether they are regressions, new failures, or fixed from the previous build.
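Classifying failures across two builds is just set arithmetic on the failing-test names parsed from their reports; the build contents below are hypothetical:

```python
# Hypothetical failing-test sets parsed from two builds' JUnit reports.
failed_previous = {"test_transform", "test_export"}
failed_current = {"test_transform", "test_login"}

new_failures = failed_current - failed_previous    # regressions: triage first
still_failing = failed_current & failed_previous   # known, ongoing issues
fixed = failed_previous - failed_current           # confidence going up

print(sorted(new_failures), sorted(still_failing), sorted(fixed))
```

Run against every pair of consecutive builds, this separates genuine regressions from long-standing failures at a glance.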

Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

Friday, December 30, 2011

ISDM - a Private Cloud Computing Model

Providing a fast solution to any problem is a challenge, and if you need to spend effort on redundant tasks or rely on ineffective, half-cooked workflows, then instead of saving time you spend it executing those workflows. A cloud can be the best option for such tasks, as you can share infrastructure and use it on demand; but setting up and managing a cloud is costly and not easy. IBM now provides a cost-effective solution named ISDM.
IBM Service Delivery Manager is a prepackaged and self-contained software appliance that is implemented in a virtual data center environment. It enables the data center to accelerate the creation of service platforms for a wide spectrum of workload types with a high degree of integration, flexibility, and resource optimization.
Use IBM Service Delivery Manager if you want to get started with a private cloud computing model. The product allows you to rapidly implement a complete software solution for service management automation in a virtual data center environment, which in turn can help your organization move towards a more dynamic infrastructure.
IBM Service Delivery Manager is a single solution that provides all the necessary software components to implement cloud computing. Cloud computing is a services acquisition and delivery model for IT resources, which can help improve business performance and control the costs of delivering IT resources to an organization. As a cloud computing quick start, IBM Service Delivery Manager allows organizations to benefit from this delivery model in a defined portion of their data center or for a specific internal project. Potential benefits include:
  • Reduction in operational and capital expenditures
  • Enhanced productivity - the ability to innovate more with fewer resources
  • Decreased time-to-market for business features that increase competitiveness
  • Standardized and consolidated IT services that drive improved resource utilization
IBM Service Delivery Manager provides preinstalled capabilities essential to a cloud model, including:
  • A self-service portal interface for reservation of computer, storage, and networking resources, including virtualized resources
  • Automated provisioning and de-provisioning of resources
  • Prepackaged automation templates and workflows for most common resource types, such as VMware virtual images and LPARs
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

IBM Smart Cloud Enterprise - IT Infrastructure as a Service

IBM SmartCloud Enterprise is an agile cloud computing infrastructure as a service (IaaS) designed to provide rapid access to security-rich, enterprise-class virtual server environments, well suited for development and test activities and other dynamic workloads. Ideal for both IT and application development teams, the IBM SmartCloud delivers cloud-based services, systems and software to meet the needs of your business.

IBM SmartCloud Enterprise offers an expansive set of enterprise-class services and software tuned to the needs of enterprises — both mid-size and large. On its own or as an integral part of other applications and solutions, IBM SmartCloud Enterprise provides you the ability to approach the cloud with confidence.

IBM SmartCloud Enterprise can help you address many important workloads and challenges that your organization faces: increase speed and responsiveness in development and test environments while reducing costs; utilize it across a broad spectrum of batch-processing workloads, including risk analysis, compliance management and data-mining projects; and, for web site hosting, deliver marketing-campaign websites efficiently, faster and with fewer resources.

For a detailed view and presentations, please refer to IBM Smart Cloud
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

Saturday, December 17, 2011

Automated Regression-How You Justify

Regression testing is nothing but looking for bugs in code that used to work in past versions. It is not an easy task, is considered time consuming, and needs to be executed with each build. Addressing it well allows us to find bugs in code as fast as possible with minimal effort (automated!). This becomes more important the longer your product has been in production, to keep customers happy. Bugs will happen and there is no way we can stop them; we just want them to be either minor ones or issues limited to new code, not the old functionality that people rely on to get their job done.
It also reduces the "drudge" work of manual testing that frustrates QA and customer support teams. That work is also subject to human memory: even if all the test cases are written down somewhere, are they all up to date? Are we sure? What happens when "human memory" is on leave? :-)
Similarly, automated regression tests act to codify and formalize the team's experience, so you don't lose that knowledge entirely when someone on the team moves on, as happens in this dynamic industry. Handling this frees up resources at different levels to do something really productive. It also helps your team be more proactive and less reactive; the more time a team spends fighting fires, the harder it is to have a truly enjoyable workplace. Not sure if you can enjoy firefighting, but I don't; it stresses me out.
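Codifying that experience can be as small as pinning known-good behavior in an automated check that runs with every build; the discount rule below is a made-up example, not from any real product:

```python
def discount(order_total):
    # Long-standing business rule customers rely on:
    # 10% off orders of 100 or more, nothing below the threshold.
    return order_total * 0.9 if order_total >= 100 else order_total

# Regression checks pinning the behavior that 'used to work';
# run these with every build so old functionality cannot silently break.
checks = [
    (99, 99),      # just below the threshold: no discount
    (100, 90.0),   # at the threshold: discount applies
    (200, 180.0),  # well above: still 10%
]
for order_total, expected in checks:
    assert discount(order_total) == expected, (order_total, expected)
print("regression checks passed")
```

The knowledge of where the threshold sits no longer lives only in someone's memory; it fails the build the moment a change breaks it.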

I will cover detailed steps and the available tools, such as those from IBM Rational and various automation managers from IBM Tivoli and IBM WebSphere, along with various workflows, in upcoming blogs in January.

Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions