Friday, December 30, 2011

ISDM - a Private Cloud Computing Model

Providing a faster solution to any problem is a challenge, and if you have to spend effort on redundant tasks or rely on ineffective, half-baked workflows, then instead of saving time you end up spending it executing those workflows. A cloud can be the best option for such tasks, since infrastructure can be shared and used on demand, but setting up and managing a costly cloud is not easy. IBM now provides a cost-effective solution for this, named ISDM.
IBM Service Delivery Manager is a prepackaged and self-contained software appliance that is implemented in a virtual data center environment. It enables the data center to accelerate the creation of service platforms for a wide spectrum of workload types with a high degree of integration, flexibility, & resource optimization.
Use IBM Service Delivery Manager if you want to get started with a private cloud computing model. The product allows you to rapidly implement a complete software solution for service management automation in a virtual data center environment, which in turn can help your organization move towards a more dynamic infrastructure.
IBM Service Delivery Manager is a single solution that provides all the necessary software components to implement cloud computing. Cloud computing is a services acquisition and delivery model for IT resources, which can help improve business performance and control the costs of delivering IT resources to an organization. As a cloud computing quick start, IBM Service Delivery Manager allows organizations to benefit from this delivery model in a defined portion of their data center or for a specific internal project. Potential benefits include:
  • Reduction in operational and capital expenditures
  • Enhanced productivity - the ability to innovate more with fewer resources
  • Decreased time-to-market for business features that increase competitiveness
  • Standardized and consolidated IT services that drive improved resource utilization
IBM Service Delivery Manager provides preinstalled capabilities essential to a cloud model, including:
  • A self-service portal interface for reservation of compute, storage, and networking resources, including virtualized resources
  • Automated provisioning and de-provisioning of resources (a small sketch follows after this list)
  • Prepackaged automation templates and workflows for most common resource types, such as VMware virtual images and LPARs
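To make the provisioning capability a little more concrete, here is a minimal Python sketch of how a script could drive a self-service provisioning API: submit a request, poll until the instance is running, and release it when the reservation ends. This is purely illustrative - the endpoint, payload fields, and token are hypothetical placeholders, not ISDM's actual interface.

    import time
    import requests  # third-party HTTP client

    BASE_URL = "https://cloud.example.com/api"   # hypothetical self-service endpoint
    TOKEN = "replace-with-a-real-token"          # hypothetical auth token
    HEADERS = {"Authorization": "Bearer " + TOKEN}

    def provision_vm(cpus, memory_gb, image):
        """Submit a provisioning request and wait until the instance is running."""
        body = {"cpus": cpus, "memoryGB": memory_gb, "image": image}
        resp = requests.post(BASE_URL + "/instances", json=body, headers=HEADERS)
        resp.raise_for_status()
        instance_id = resp.json()["id"]
        # Poll until the automation workflow reports the instance as RUNNING.
        while True:
            status = requests.get(BASE_URL + "/instances/" + instance_id,
                                  headers=HEADERS).json()["status"]
            if status == "RUNNING":
                return instance_id
            time.sleep(30)

    def deprovision_vm(instance_id):
        """Release the resources when the reservation ends."""
        requests.delete(BASE_URL + "/instances/" + instance_id,
                        headers=HEADERS).raise_for_status()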
-Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

IBM SmartCloud Enterprise - IT Infrastructure as a Service

IBM SmartCloud Enterprise is an agile cloud computing infrastructure as a service (IaaS) designed to provide rapid access to security-rich, enterprise-class virtual server environments, well suited for development and test activities and other dynamic workloads. Ideal for both IT and application development teams, the IBM SmartCloud delivers cloud-based services, systems and software to meet the needs of your business.

IBM SmartCloud Enterprise offers an expansive set of enterprise-class services and software tuned to the needs of enterprises — both mid-size and large. On its own or as an integral part of other applications and solutions, IBM SmartCloud Enterprise provides you the ability to approach the cloud with confidence.

IBM SmartCloud Enterprise can help you address many important workloads and challenges that your organization faces, such as increasing speed and responsiveness in development and test environments while reducing costs; running a broad spectrum of batch-processing workloads, including risk analysis, compliance management, and data mining projects; and website hosting, delivering marketing campaign websites faster and with fewer resources.

For a detailed view and presentations, please refer to IBM Smart Cloud.
-Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions


Saturday, December 17, 2011

Automated Regression - How You Justify It

Regression testing is nothing but looking for bugs in code that used to work in past versions. It is not an easy task, is considered time consuming, and needs to be executed with each build. Addressing it properly allows us to find bugs in code as fast as possible with minimal effort (automated!). This becomes more important the longer your product has been in production, if you want to keep customers happy. Bugs will happen and there is no way we can stop them; we just want them to be either minor ones or issues limited to new code - not the old functionality that people rely on to get their job done.
It also reduces the "drudge" work of manual testing that frustrates QA and customer support teams. That work is also subject to human memory - even if all the test cases are written down somewhere, are they all up to date? Are we sure? And what happens when "human memory" is on leave? :-)
Similarly, automated regression tests codify and formalize the team's experience, so you don't lose that knowledge entirely when someone on the team moves on, as happens in this dynamic industry. Handling this frees up resources at all levels to do something really productive. It also helps your team be more proactive and less reactive: the more time a team spends fighting fires, the harder it is to have a truly enjoyable workplace. Not sure whether you can enjoy that, but I don't, as it stresses me out.
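To make this concrete, here is a minimal sketch of a codified regression test using Python's built-in unittest framework. The function under test (calculate_invoice) is invented for illustration; the point is that once a past defect is captured as a test, it is re-checked automatically with every build.

    import unittest

    # Hypothetical function under test; in practice you would import your real code.
    def calculate_invoice(quantity, unit_price, discount=0.0):
        """Total price after applying a fractional discount once."""
        return round(quantity * unit_price * (1.0 - discount), 2)

    class BillingRegressionTests(unittest.TestCase):
        def test_discount_applied_only_once(self):
            # Guards against a past defect where the discount was applied twice.
            self.assertEqual(calculate_invoice(10, 5.0, discount=0.1), 45.0)

        def test_zero_quantity_costs_nothing(self):
            # Old behaviour customers rely on: empty orders cost nothing.
            self.assertEqual(calculate_invoice(0, 99.0), 0.0)

    if __name__ == "__main__":
        unittest.main()  # run by the build, so every check-in re-verifies old behaviour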

I will share detailed steps and the available tools - such as those from IBM Rational and the various automation managers from IBM Tivoli and IBM WebSphere, along with their workflows - in upcoming blogs in January.


Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions


Friday, December 2, 2011

Automation - How It Will Help Optimize Resources

Complexity in the SDLC is increasing daily, and so are the kinds of issues we face; handling them optimally means spending time where it makes sense, which depends on your role and its viewpoint.
For example, when I am the one handling things, I want to use resources and infrastructure optimally and let software do the redundant work, so teams can spend their time enhancing capabilities rather than doing those tasks. But it requires spending time on automating those areas, so it takes the back seat to the daily tasks you need to perform, even though everyone is aware of its positive impact. Being an architect, I would rather minimize or remove manual effort - a long-term gain in team productivity, but at a cost to the product in the short term. When I see this from a developer's perspective, I would rather be focused on developing the cool new product than on bug fixing or modifying current tests.
However, good automated regression testing is something we can in no way ignore, and when I say testing I mean that from code check-in to the release cycle no manual intervention should be required - not even for configuring machines or test assets (see the sketch after the list below). It is usually not a huge investment of time, but the payoff is large and grows over time - like a savings account. From each point of view:
  • Developer --> Less time bug fixing; enjoy your family life as well
  • Manager --> Better quality product, risks identified earlier, happy customers
  • Architect --> Spend time on new designs with minimal product risk
  • QA --> Less manual drudge work, satisfied management, and improved quality
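Below is a rough sketch of "no manual intervention from check-in to release": a small Python driver that chains build, regression tests, and packaging, which a scheduler or CI server could run on every check-in. The individual commands are placeholders - substitute whatever your real build, test, and packaging tools are.

    import subprocess
    import sys

    # Placeholder commands; swap in your real build/test/package tooling.
    PIPELINE = [
        ["make", "build"],                                  # compile the product
        ["python", "-m", "unittest", "discover", "tests"],  # automated regression suite
        ["make", "package"],                                # produce the release artifact
    ]

    def run_pipeline():
        for step in PIPELINE:
            print("running:", " ".join(step))
            if subprocess.run(step).returncode != 0:
                # Fail fast: a broken step stops the release and flags the check-in.
                sys.exit("pipeline failed at: " + " ".join(step))
        print("all steps passed - artifact is ready for release")

    if __name__ == "__main__":
        run_pipeline()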
In the next blogs I will discuss the why and how of automating the regression cycle.


Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions


Friday, November 18, 2011

Automation and Cloud Offering with IBM TSAM and IBM WebSphere CloudBurst

The Tivoli Service Automation Manager (TSAM) and the WebSphere CloudBurst Appliance are private cloud management offerings that enable the cohesive approach necessary when building a cloud environment. It is important to note that the two are not competing products, but instead are complementary in nature. Primarily they differ in their degree of specialization:
  • WebSphere CloudBurst focuses on addressing WebSphere workloads.
  • Tivoli Service Automation Manager provides standard management capabilities to a broad range of workloads.
You must consider the needs of your private cloud environment when deciding on whether WebSphere CloudBurst or Tivoli Service Automation Manager is appropriate for your cloud needs. It doesn't have to be an either/or decision — there are indeed usage scenarios where you will benefit from an integrated solution of both Tivoli Service Automation Manager and WebSphere CloudBurst.

The Tivoli Service Automation Manager provides you with the capability to request, deploy, manage, and monitor cloud services from a single management interface. Regardless of which type of cloud service or software components constitute the service, you can use Tivoli Service Automation Manager to standardize and automate the delivery of the environment to your cloud. Once delivered, Tivoli Service Automation Manager builds on existing IT infrastructure to provide insight into the full life cycle of the cloud-based service.

The WebSphere CloudBurst Appliance is a cloud management device, purpose-built for WebSphere application environments. It builds on special virtual images, such as the WebSphere Application Server Hypervisor Edition, and allows users to create patterns that represent their target application environment. These patterns encapsulate the application infrastructure nodes and configuration necessary for the environment; you can use WebSphere CloudBurst to deploy them into your private cloud. Once deployed, WebSphere CloudBurst provides management and monitoring capabilities that give you the necessary controls over your running application environments.

-Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

Thursday, September 29, 2011

InfoSphere DataStage and Teradata Sync Tables

Teradata sync tables are created by the DataStage Teradata stages. Teradata Enterprise creates a sync table named terasync that is shared by all Teradata Enterprise jobs loading into the same database. The name of the sync table created by the Teradata Connector is supplied by the user, and that table can either be shared by other Teradata Connector jobs (with each job using a unique Sync ID key into that table) or each Teradata Connector job can have its own sync table.
These sync tables are a necessity due to requirements imposed by Teradata's parallel bulk load and export interfaces. These interfaces require a certain amount of synchronization at the start and end of a load or export and at every checkpoint in between. The interface requires a sequence of method calls to be done in lock step. After each player process has called the first method in the sequence, they cannot proceed to call the next method until all player processes have finished calling the first method. So the sync table is used as a means of communication between the player processes.

In Teradata Enterprise, you cannot avoid using the terasync table. In the Teradata Connector, you can avoid using the sync table by setting the Parallel synchronization property to No; however, the stage will then be forced to run sequentially.
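To picture the kind of lock-step coordination the sync table provides, here is a small Python sketch using a multiprocessing Barrier: no "player" moves on to the next phase until every player has finished the current one. This is only a conceptual analogy for the synchronization protocol, not how the Teradata interfaces or DataStage actually implement it.

    from multiprocessing import Barrier, Process

    NUM_PLAYERS = 4

    def player(player_id, barrier):
        # Phase 1: every player must complete this before anyone starts phase 2.
        print("player", player_id, "finished phase 1 (e.g. acquiring its load session)")
        barrier.wait()
        # Phase 2: reached only after all players have completed phase 1.
        print("player", player_id, "starting phase 2 (e.g. sending rows / checkpointing)")

    if __name__ == "__main__":
        barrier = Barrier(NUM_PLAYERS)   # plays the role of the shared sync table
        workers = [Process(target=player, args=(i, barrier)) for i in range(NUM_PLAYERS)]
        for w in workers:
            w.start()
        for w in workers:
            w.join()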


-Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

Big Data - Changing approach towards BI and Analytics

Data... more data, and then an n-times multiplication of this data. Data is growing rapidly across enterprises, irrespective of the country or city you are in; its growth is explosive. The digital universe has expanded, and so has the data within organizations. But not all of this data is relevant. We don't really work on zettabytes of data for the time being, yet we still need to deal with unprecedented data growth from a wide variety of sources and systems.

In recent days a new term, "big data", has emerged to describe this growth, along with the systems and technology required to leverage it. Generally speaking, big data represents data sets that can no longer be easily managed or analyzed with traditional or common data management tools, methods, and infrastructures. Big data has certain characteristics: high velocity, high volume, and a variety of data structures. This definitely brings new challenges to data analysis, search, data integration, information discovery and exploration, reporting, and system maintenance.
 

Here is a very nice article from Shawn Rogers that discusses various issues and available systems around this "big data", including Hadoop. It also discusses how it is impacting BI and analytics.

-Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

Requirements Gathering - How Critical Is It for a Project?

Requirements gathering is 'THE CRITICAL' foundational activity for building the data model(s) for a business intelligence environment. I agree that not all requirements can be captured and documented when we start, but a detailed requirements-gathering approach is still likely to yield information that helps in developing the initial design. Of course, it also helps us extend the design if additional needs are identified. It further helps to map the "real requirements" and explore them, instead of being diverted by whatever is provided. The 'real world' is driven by perception, and we have to live in this real world.
When we want to capture requirements, we should prepare ourselves for it. That includes understanding the scope and the business issues, collecting relevant documents, and knowing who is going to provide the information and how.
Project Scope - a document describing what needs to be delivered and what can be ignored. It also mentions the minimal project timeline and the tentatively available resources. Of course, it contains the major issues and assumptions, to document risks.

Business Issues - The data analyst should have an in-depth understanding of the information gathered and its mapping to corporate strategies. The analyst should understand which problems need to be addressed.

Relevant Documents - The analyst should collect samples of decision-support reports and spreadsheets as a starting point to set the baseline. These can be used to identify deficiencies in process and format, and they also provide information on the available data that can be used to meet the business needs.

Who and How - Information needs to be solicited from a variety of people in roles including the sponsors, steering committee members, business SMEs, business analysts, and end users, as well as the people involved in providing decision-support information or familiar with possible data sources. After identifying these people, discuss with each individual or group to collect the information in the manner they are most comfortable with. Information gathering is a technique, and it needs to be tailored to each participant. Sessions need to be well planned, with crisp questions, to avoid missing the opportunity in the time allotted, but flexible enough to handle the situation and get answers in a different format. Discussions should be documented, and any follow-up commitments should be completed.

-Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

Sunday, September 18, 2011

Cloud Computing - How It Is Growing and Its Impact

New research indicates that enterprises have started adopting cloud computing and that the upward trend is expected to continue in the near future. Here is a detailed discussion of the various areas where cloud computing is being used extensively; it covers levels of maturity, trends, and best practices in organizations' use of business data in the cloud.
Please visit Ventana Research for details.
 
-Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

What Does It Take to Justify Data Modeling in an Organization?

Data management is a key aspect and major challenge of any organization. Professionals involved in managing data understand the negative impact of poor data architecture practices. A data model is not a piece of design that just looks nice on the wall, yet that is how we treat it when we ignore its practical implications. This situation leads to confusion, misinterpretation, and assumptions, ultimately wasting time and effort and directly impacting the bottom line.
Here is a detailed discussion by Jason Tiret on how knowing "what motivates your upper management will make data modeling justification much easier":
R-E-S-P-E-C-T for Data Modeling 
 
-Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

Wednesday, June 29, 2011

What is Data Transformation - Core of ETL

Every business needs to transform its data based on various reporting and audit requirements. A business can't use data directly from the source, and so ETL tools like InfoSphere DataStage are used. Transformation operations change one set of data values into other values when the mapping specifications are generated and run as jobs. Data transformation is a process that allows you to select source data through some application method, convert that data, and map it to the format required by the target systems. Developers manipulate this data to bring it into compliance with business, domain, and integrity rules, as well as with other data within the target environment. In simple terms, it is transforming data according to mapping specifications.

A transformation rule is nothing but a set of instructions provided to a developer, who converts it into a job defined on the basis of the mapping specification. Transformation rules describe the current state of the information and what needs to be done to it to produce a particular result. A business analyst might add the business rules and collaborate with the developers to turn the business rules into a job. Data can be transformed to business standards: applying consistent representations to data, correcting misspellings, and incorporating business or industry standards.

Transformation rules can be chosen from a set of imported rules to transform the source data and produce a result that the end business application might need. The result of a transformation must be a value whose type fits the type of the target object. A transformation can include the following items:
  • Converting from one data type to another
  • Resolving inconsistencies
  • Converting currencies for monetary calculations
  • Reducing redundant or duplicated data
A transformation can be in the form of functions, join operations, lookup statements, expressions, or annotated business rules. Transformations can be on a single column or on multiple columns. Tools like IBM InfoSphere DataStage and InfoSphere Information Server make life easier for developers by providing a canvas on which to map the specifications and design jobs. InfoSphere FastTrack further speeds up the mapping process.
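In DataStage these transformations are designed on the graphical canvas rather than hand-coded, but as a language-neutral illustration of the kinds of rules listed above, here is a small Python sketch that converts a data type, standardizes a value, converts a currency, and removes duplicates. The field names and the exchange rate are invented for the example.

    # Illustrative transformation rules applied to source records.
    USD_PER_EUR = 1.10                      # invented rate; a real job would look this up
    COUNTRY_STANDARDS = {"U.S.": "USA", "US": "USA", "United States": "USA"}

    def transform(record):
        """Map one source row to the target format."""
        return {
            "customer_id": int(record["customer_id"]),            # string -> integer
            "country": COUNTRY_STANDARDS.get(record["country"],   # apply a consistent
                                             record["country"]),  # business-standard value
            "amount_usd": round(float(record["amount_eur"]) * USD_PER_EUR, 2),
        }

    def transform_all(source_rows):
        """Transform every row and drop duplicates on the business key."""
        seen, target_rows = set(), []
        for row in source_rows:
            out = transform(row)
            key = (out["customer_id"], out["amount_usd"])
            if key not in seen:                # reduce redundant or duplicated data
                seen.add(key)
                target_rows.append(out)
        return target_rows

    print(transform_all([
        {"customer_id": "42", "country": "U.S.", "amount_eur": "100.00"},
        {"customer_id": "42", "country": "US", "amount_eur": "100.00"},   # duplicate
    ]))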
 
-Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

Thursday, June 16, 2011

A century of IBM: Technology pioneer continues to ‘Think’

Happy Birthday IBM.
On June 16, 1911, companies that made scales, punch clocks for work, and other machines merged to form the Computing-Tabulating-Recording Co., which was renamed IBM in 1924 and went on to give us our own Watson in 2010. IBM has come a long way, and hence we always simplify IBM by saying "Think" - "Click for IBM 1st".

IBM started way back by making sense of millions of punch card records, and it sees future innovation in the analysis of the billions and billions of bits of data being transmitted in the 21st century. Those inventions from IBM's early days are key to future generations and trends: data from multiple sources, integration of data, real-time data processing, and business analytics all have their base in those initial inventions.

As Watson gets used in the real world as a medical diagnostic tool that can understand plain language and analyze mountains of information, we can see where IBM's thinking and technology are going. Based on this, we can say that IBM and other technology companies are focusing on data and its analytics; data - or rather huge data - in raw form needs to be processed, and its analysis is going to generate many new businesses in the future.

-Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions


Wednesday, June 15, 2011

From ETL to T-ETL - Its Advantages

In June 2010 I started my blogging with ETL and ELT. Today I am taking it to the next level with T-ETL: what it is and how it can benefit enterprises.
The federated approach complements the traditional method of data consolidation. Consolidated data stores, which are typically maintained using extract, transform, load (ETL) or data replication, are becoming the standard choice for information integration today. In today's world, ETL tools - and in certain cases data stores and streaming data together - are becoming the best way to achieve fast, highly available, and integrated access to related information. By combining data consolidation with federation, businesses achieve the flexibility and responsiveness that is required in today's fast-paced environment.

What do we achieve if we integrate InfoSphere DataStage and InfoSphere Federation Server to perform data consolidation? With this integration, InfoSphere Federation Server can be used as a data pre-processor, performing initial transformations on the data either at the source or during extraction. This means we are introducing transformation before the real ETL, hence the name T-ETL. The T-ETL architecture can use federation to join, aggregate, and filter data before it enters InfoSphere DataStage, which can then use its parallel engine to perform more complex transformations and maintain the target.
The architecture draws on the strengths of both products, producing a flexible and highly efficient solution for data consolidation: WebSphere Federation Server for its joining and SQL processing capabilities, and WebSphere DataStage for its parallel data flow and powerful transformation logic. The WebSphere Federation Server cost-based optimizer also allows the T-ETL architecture to react dynamically to changes in data volumes and patterns, without the need to modify the job.

Transformation followed by ETL (T-ETL) is not a new concept; it is as old as ETL and ELT. Many ETL jobs already employ some form of transformation while extracting the data - say, filtering and aggregating data, or performing a join between two source tables that reside on the same source database. However, the restriction that the source objects must exist on the same data source has severely limited the scope of T-ETL solutions to date. InfoSphere Federation Server removes this limitation and extends this initial transformation stage to the heterogeneous data sources it supports.
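As a purely conceptual sketch of the T-ETL flow (not an actual Federation Server or DataStage API), the Python below mimics the idea: an initial "federated" step joins and filters rows from two sources so that only the reduced stream reaches the later, heavier transformation stage. The tables and column names are invented.

    # Stage T: federation-style pre-processing - join and filter close to the sources,
    # so only the reduced result flows on to the parallel ETL engine.
    orders = [{"order_id": 1, "cust_id": 7, "amount": 120.0},
              {"order_id": 2, "cust_id": 8, "amount": 15.0}]
    customers = [{"cust_id": 7, "region": "EMEA"},
                 {"cust_id": 8, "region": "APAC"}]

    def federated_prejoin(min_amount):
        by_cust = {c["cust_id"]: c for c in customers}
        for order in orders:
            if order["amount"] >= min_amount:    # filter before extraction
                # join at the source, federation-style
                yield {**order, "region": by_cust[order["cust_id"]]["region"]}

    # Stage ETL: the downstream engine applies the heavier business transformation
    # to the already joined and filtered stream, then loads the target.
    def etl_stage(rows):
        target = []
        for row in rows:
            row["amount_band"] = "large" if row["amount"] > 100 else "small"
            target.append(row)
        return target

    print(etl_stage(federated_prejoin(min_amount=50.0)))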
-Ritesh
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions

Why Use InfoSphere DataStage for ETL Processes

IBM InfoSphere Information Server is a suite - or, one can say, an umbrella - of multiple integrated solutions with features for profiling, cleansing, extraction, transformation, de-identification, and loading. InfoSphere DataStage is an ETL tool and part of the IBM Information Platforms Solutions suite. It uses a graphical notation to construct data integration solutions and makes life easier for the ETL process developer.
IBM InfoSphere DataStage provides a series of benefits:

A flexible development environment for ETL process developers. With its feature-rich Designer, developers can build their processes the way they want and can even plan components for reuse. Features like multiple instances of a single process allow processes to be shared and redundancy removed across the enterprise. The ETL developer can carry out the data integration process quickly and can make use of extensible objects and functions, apart from implementing and using custom functions.

With InfoSphere DataStage, an ETL developer can not only retrieve data from heterogeneous applications but can also join data at the source level or at the DataStage level and apply any business transformation rule from within the Designer, without having to write any procedural code.

With the introduction of Information Server, a common data infrastructure is used for data movement and data quality (metadata repository, parallel processing framework, development environment), providing complete data lineage. Of course, all this comes along with the capability of executing the ETL process in parallel mode, with unlimited scalability and maximum utilization of hardware resources.

-Ritesh
 Disclaimer: "The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.

IBM's Centennial and Community experience by ISL Hyderabad

It is a proud moment for IBMers as they celebrate 100 years of IBM's vision. IBMers worldwide are doing their bit to help the local community and go beyond their regular hectic job routines. It is true that in the age of the modern corporation community service is a must, and everyone needs to push the boundaries of science and technology, along with their responsibility towards IBM, and help make their communities successful.

Today the technical community at the Software Laboratories in Hyderabad is celebrating the freedom to work and has decided to take up a community experience. Technology experts are facing a new challenge today while working with - or I should say mentoring - young kids from various schools in Hyderabad, including a few from the interiors. It is a 'real time' challenge and a completely different experience for these experts as they share their ideas with children. It will be real fun to see how these experts convince and explain things to our next-generation kids, who look at technology from a different perspective. From my perspective, it is more of a learning exercise for these experts than for the young kids, since kids are the real innovators in all fields: they know how to tackle problems with multiple tries and without getting tense.
Let me come back and share the experience - and share more - as I am sure it is going to be real fun.
Disclaimer: The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions
 

Wednesday, April 13, 2011

Data Masking in Integration Space


Data privacy, data encryption, and data security all point to securing information - all of it to prevent unauthorized access to confidential information such as an SSN, a passport number, a driving license, and above all a credit card number. Data theft is common, and enterprises put processes in place to safeguard data. All of this helps to meet compliance requirements and to avoid lawsuits and damage to the brand image. As enterprises adopt policies to control the risk of data theft across all areas and application usage, Information Server also needs to match them to meet customer demand.

Recently, with Information Server 8.5, IBM announced the launch of the new InfoSphere DataStage Pack for Data Masking. This pack allows users to mask sensitive data that must be included for analysis, in research, or for the development of new software. Using this pack, you can comply with company and government standards for data privacy, including the Sarbanes-Oxley (SOX) Act (and its equivalents around the world).
The masking pack has a variety of predefined masking policies for masking different types of data. These predefined data masking policies can be used to mask information in context-aware data types and generic data types.
Some of the key features of the IBM InfoSphere DataStage Pack for Data Masking are:
  • Consistently mask an identifier in all data sources across the enterprise.
  • Mask individual records, while maintaining analytical integrity.
  • Mask data values with fictional but valid values for data types or business element types, while maintaining application integrity.
  • Mask data repeatedly, while maintaining the referential integrity.
  • Create masked test databases. 
For detailed information and usage, please do refer to the DataStage Masking Pack.
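To illustrate the idea of consistent, repeatable masking - the same input always yields the same masked value, so referential integrity across tables is preserved - here is a small Python sketch using a keyed HMAC. This is only a conceptual example, not how the DataStage masking pack works internally; the secret key and output format are placeholders.

    import hmac
    import hashlib

    SECRET_KEY = b"replace-with-a-managed-secret"   # placeholder masking key

    def mask_identifier(value, digits=9):
        """Deterministically mask an identifier: same input -> same masked output."""
        digest = hmac.new(SECRET_KEY, value.encode("utf-8"), hashlib.sha256).hexdigest()
        # Keep the result numeric and fixed-length so downstream applications still accept it.
        return str(int(digest, 16) % (10 ** digits)).zfill(digits)

    # The same SSN masks to the same value in every table and every run,
    # so joins between masked tables still line up.
    print(mask_identifier("123-45-6789"))
    print(mask_identifier("123-45-6789"))   # identical output
    print(mask_identifier("987-65-4321"))   # different identifier, different mask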

-Ritesh
 Disclaimer: "The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinion

Sunday, February 13, 2011

My Take on the Day of Love

Well, I usually blog on technical ideas and their usage, and I am not sure why or how I decided to blog on the Day of Love. I am an unusual person to take this on, but let me try. People say change is life, and I say keep pace with the changing life cycle (no, that is not a typo for 'style'). So I am trying the same.
Everyone knows February means romance, commitments, promises, and affection. I am also exposed to this fact.
Different intentions, different ideas, but the same name: 'V Day'. It is all about expressions and how different people express it differently. But I would like to say it is really complicated. Isn't it?
"Love has a tendency of appearing when you least expect it." If you have watched the movie "Nine", it is about keeping the relationship at the center and living around it. I am sure many will agree it can happen even in the stickiest of situations - go watch The Princess and the Frog :-). But on a serious note, learning to forgive should be the first and last lesson in love. All it teaches is "honesty". It is about commitment.
Is "love" enough in the midst of constant despair? Can life go on with it? Aren't we really pretending to be happy and celebrating while keeping reality behind? So where is the honesty here? If we can't share our happiness together honestly, how can you think we will stick together in hard times?
So commit, promise, and be honest on this day, instead of pretending because your friend is doing so - he is doing the same, thinking the same :-).

"No intention to preach or lecture :-). Enjoy the day with friends who care about you." They are the ones who are "real", and it has to be a two-way path; a one-way path leads nowhere. Love really needs effort to make it work. Not sure how many of you really do that, but I wish you all luck.

Disclaimer: This is a really personal view. You can read and disagree, or ignore and disagree :-). But do enjoy, with honesty.

Thursday, February 3, 2011

Debugging is an Art - BTB Series 1

As part of Back to Basics, I decided to take on debugging, something software engineers worldwide spend a great deal of time on. It happens at various levels and is known by various names; many even hate it, but we can't live without it.
Whether we are using an IBM AIX system, applications like WebSphere or any InfoSphere application, or even DB2 or Oracle, we have to do debugging when we use them.
So here is my take.

Development and maintenance are two different approaches. In the product life cycle it is maintenance that lasts the longest and requires deep skills and techniques. As everyone is aware, debugging is the process of isolating and correcting the cause that a test case uncovers - or, put another way, the process that results in the removal of an error.
Debugging is not simply looking at the problem and the test case and identifying the solution. "It is an ART." Engineers are confronted with a "symptomatic" indication of the problem, i.e. an external manifestation of the error, while the real internal cause may have no obvious relationship to it.
So let me define it: the art - or poorly understood mental process - that connects a symptom to a cause is called "debugging".
Another confusion people have is that it is testing. It is not, but it always occurs as a consequence of testing.
Debugging always has two outcomes:
  1. The cause is found and corrected.
  2. The cause is not found, and we have to follow a suspect-and-proceed approach with more tests, validating the suspicion iteratively until the problem is found (a small sketch of this narrowing appears after the next list).
A problem - or, in field terms, a "bug" - provides clues through its behavior:
  1. The symptom may be geographically remote: it may not appear in the part of the program that contains the error, but instead affect a site far removed from it. Highly coupled components exacerbate this situation.
  2. The symptom may disappear temporarily as an effect of another error correction.
  3. It can be the impact of non-errors, such as round-off inaccuracies.
  4. It can be caused by human error, which is the most difficult to trace.
  5. It can be the result of a timing problem rather than a processing issue.
  6. It may be difficult to accurately reproduce the input conditions, as in real situations the input ordering is indeterminate.
  7. The issue may be intermittent.
  8. It can be the impact of parallel processing, with tasks distributed across and running on various processors.
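One simple way to picture the iterative "suspect and validate" narrowing is to bisect a failing input until the smallest case that still reproduces the symptom is isolated. The Python sketch below does exactly that; the failing condition is invented for illustration.

    # Sketch of iterative narrowing: keep halving the failing input and retesting
    # until the minimal reproduction is left. The "bad" record is made up.
    def process(batch):
        """Stand-in for the code under suspicion; fails when one bad record is present."""
        if "bad-record" in batch:
            raise ValueError("processing failed")

    def fails(batch):
        try:
            process(batch)
            return False
        except ValueError:
            return True

    def isolate(batch):
        """Repeatedly split the failing input in half, keeping the half that still fails."""
        while len(batch) > 1:
            mid = len(batch) // 2
            left, right = batch[:mid], batch[mid:]
            batch = left if fails(left) else right
        return batch  # the minimal input that still reproduces the symptom

    data = ["ok"] * 7 + ["bad-record"] + ["ok"] * 4
    print(isolate(data))   # -> ['bad-record']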

I will discuss various strategies and tactics for debugging in part 2 of this series.

-Ritesh
Disclaimer: "The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.

Tuesday, February 1, 2011

Back to Basics - Are we really forgetting?

I had a few discussions with a few people in January and was surprised: people are slowly forgetting software engineering concepts, or what we learned in college. Companies like IBM are building new products day in, day out, and people are using them. But it seems that, as a community of professionals, we have forgotten a few basic things that we need to spread, as they come in handy every day.
So here comes the Back to Basics series. In this series I will highlight topics like debugging and why it is an art, what it takes to make a product successful, why to follow a specific model, which new tools can help us in our daily working environment, and, while using all these big applications, what other tools we need to have.
And how maintenance can be easier if we follow a few simple rules - i.e. the "Back to Basics - BTB Series".
So watch this space for the initial set in a few days.

-Ritesh
 Disclaimer: "The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.

Tuesday, January 18, 2011

Is the New System Becoming a Legacy System Too Fast?

We have days of discussions spread across months. Then we come up with a so-called "next generation" design and boast about developing a futuristic system that is going to solve all our problems; the customer is going to get a huge ROI, and so on... Does it really happen? Or is it otherwise?
I happened to have discussions and explorations with one customer in late 2005, and after multiple discussions, trend analysis, and their design and implementation phase, the customer went "live" with the new system in mid 2009. Now here comes January 2011, and it seems the same futuristic system is already a legacy system, including its data. The customer is now considering another new system that addresses their problems spread across multiple countries and gives them a consolidated system. It is 'good for consulting', but being part of product development it forces me to think: don't we have more optimal and faster ways?
How do we shorten our long development cycles and provide customers truly futuristic systems before they become just another legacy system? This is forcing me to think, explore, and find new ways to speed up these implementations - to let customers enjoy the benefits of a system before it becomes legacy. It saves their energy, and the investment can be used to do more with the same data. The only way forward is to come up with a few tools that can speed up this process.

Everyone says the Agile methodology works well. I am not sure how it will help our customers in the long run if it is followed only during product usage. How, and what the possibilities are, I will think about and share soon.

-Ritesh

 Disclaimer: "The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions.

Tuesday, January 11, 2011

Tooling around Migration

Migration is always a complex thing, and enterprises hate it. The first question is "Why?" - it means additional cost, more time, resources, planning, and so on. The impact is that customers stick to older versions, which pushes cost onto the various product teams. The maintenance cost of a product is always very high, and if customers are hesitant to migrate, the cost doubles, as features need to be kept in the older releases.
Now companies, including IBM, are focusing on how to help customers migrate - how to enable them to use new versions and provide more features. This opens the scope for more advanced tooling around migrations, especially in the area of ETL. With ETL, ELT, and other modules picking up upgrades - whether of databases, operating systems, product patches, or even maintenance releases - all of this requires some testing before the customer goes "live". Currently this process is less than semi-automated and time consuming, so there are long delays between upgrades.

It looks like IBM and the other vendors in the ETL tooling area are taking this as a challenge and addressing it with various tools around their products. Though this is just the beginning, it is an area where more focus in the coming years is going to be a customer demand. Migration is not a one-time effort but an ongoing one, with data spread across operating systems, databases, and other formats, so a consistent focus is required.
So tooling around migration is, in my opinion, the key to the future.


-Ritesh

 Disclaimer: "The postings on this site are my own and don't necessarily represent IBM's positions, strategies or opinions."