In the first three parts of this blog series on integration platform as a service (iPaaS), I reviewed the classic integration requirements and outlined four new requirements that are driving the demand for cloud integration in the enterprise:

  1. Resiliency
  2. Fluidity in hybrid deployments
  3. Non-existent lifecycle management of the platform
  4. Future-proofing for the world of social, mobile, analytics, cloud, and internet of things (SMACT)

In this post, I’ll review requirement #4: Future-proofing for the world of social, mobile, analytics, cloud, and internet of things (SMACT).

Even if your organization has traditionally been conservative and is currently only at the assessment stage on the value of a cloud-centric IT infrastructure, and specifically software as a service (SaaS) applications, it is almost a certainty that your business stakeholders will soon expect your IT organization to handle social data or enable mobile channels. And for those companies that sell products with hardware components, the so-called Internet of Things (IoT) will be a pressing need, if it is not already. When you consider the volume, velocity and variety of data, not to mention the need to harvest, integrate and analyze it, the world of SMACT can be daunting.

Today many IT organizations are using massively parallel or big data technologies such as Hadoop and Amazon Redshift to build enterprise data hubs or data lakes. These technologies are ideal for writing sophisticated algorithms that analyze data for problem detection or for historical and predictive trends. Most times, this data is aggregated from a variety of sources – some business transaction data sources (such as Oracle databases or salesforce.com) and some activity data such as website clickstream data. However, enthusiasm for transitioning to these big data technologies can sometimes result in a lack of attention to the key prerequisite of getting that data into these systems in the first place, thereby amplifying the integrator’s dilemma that is crippling so many enterprise IT organizations today.

In order to handle the new SMACT data and API requirements seamlessly, your integration platform as a service (iPaaS) needs to deliver scale without slowing your data initiatives down. Your iPaaS needs to deliver elastic scale that expands and contracts its compute capacity to handle variable workloads while pumping data into your analytics infrastructure; social and IoT data can demonstrate significant variability. Your cloud integration platform also needs to move data in a lightweight format and add minimal overhead; JSON is regarded as the compact format of choice when compared with last-generation formats such as XML (see my post on Why Buses Don’t Fly in the Cloud and Greg Benson’s post on JSON-centric iPaaS). And lastly, your iPaaS should be able to handle REST-based streaming APIs to continuously feed your data analytics platform. Without these new approaches to iPaaS, you are setting yourself up to fall short on your big data and SMACT initiatives.
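
To make the format overhead concrete, here is a minimal Python sketch (the event fields are invented for illustration) that encodes the same record as JSON and as XML and compares the wire sizes:

```python
import json
import xml.etree.ElementTree as ET

# One clickstream-style event, encoded both ways (field names are hypothetical).
event = {"user": "u123", "page": "/pricing", "ts": 1397512800}

json_bytes = json.dumps(event, separators=(",", ":")).encode("utf-8")

root = ET.Element("event")
for key, value in event.items():
    ET.SubElement(root, key).text = str(value)
xml_bytes = ET.tostring(root)

# The JSON encoding is noticeably smaller per record, and the gap
# compounds across millions of events per day in a streaming feed.
print(len(json_bytes), "bytes as JSON")
print(len(xml_bytes), "bytes as XML")
```

Multiplied across the event volumes typical of social and IoT feeds, that per-record overhead translates directly into bandwidth and processing cost.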

With a modern iPaaS, you are set to reap several benefits. Here are a few:

  1. You no longer need two platforms – one for EAI and one for ETL – and can standardize on a single iPaaS for ALL your integration needs. As a result of this rationalization, you will contain software and hardware costs and need fewer skilled resources. Your developers and administrators can double up to build and manage ETL as well as EAI workloads.
  2. With its elastic scale, you will not have to plan and provision resources around black swan events such as traffic spikes.
  3. And lastly, a more compact REST+JSON architecture will help you optimize hardware resources as well as seamlessly deliver on the growing mobile requirements of your business.

Last week we made the trek to Boulder to meet with the Boulder Business Intelligence Brain Trust (BBBT). With so many business intelligence thought leaders and practitioners living in the area, I must admit that I was very excited to finally visit Boulder. I was also excited to see BBBT founder Claudia Imhoff and connect with the broad range and reach of independent analysts who are now part of the BBBT network. Not familiar with the BBBT? The BBBT member list is a who’s who of business intelligence and data integration. According to their website:

“Since 2006, we’ve served the BI industry by organizing half-day vendor presentations for our members, who are independent analysts, consultants, and experts. It’s a reciprocal arrangement. Our members get briefed on current and planned tools and technologies, and the vendors get valuable feedback from us, such as ideas on where BI is going, advice on their marketing direction and message, and input on their offerings.”

The SnapLogic team focused on what we call the Integrator’s Dilemma. As we noted in our announcement about the BBBT session: “As sure as death and taxes, application and data integration will converge in the cloud. This session will introduce the SnapLogic Integration Cloud, dig into the architecture and roadmap and provide an overview of the common cloud services and on-premises integration and orchestration use cases we’re seeing in the market.”

Here’s what Claudia Imhoff had to say about the meeting:

“In the past, data warehouse deployments consisted of an enterprise data warehouse that spawned a series of data marts, usually created by IT on-premises using structured data for operational systems. Today’s IT implementers face very different challenges stemming from new (big) data sources such as unstructured information and large volume sensor data, new display technologies such as mobile devices, and new deployment options such as cloud computing. Companies like SnapLogic help to ease the problems that inevitably arise from extending beyond the EDW architectures because they were developed during this transition – they “understand” this next generation of BI.”

You can listen to our podcast discussion here.

While there was a bit of an internet connection issue, there were some great tweets from the first hour of our session. I’ve embedded some of them from a Storify post below and will be sure to post the link to the recording when it’s available. Thanks to Claudia and Dave for a great session! We truly appreciated the opportunity to connect with the #BBBT.

Last week we hosted a very popular webcast, What to Look for in a Cloud Integration Platform, with Dr. Stefan Ried, Forrester’s Principal Analyst and Vice President serving CIOs. You can watch the recording here. Stefan provided a detailed overview of Forrester’s research on cloud-based integration and the trends in the market, and he outlined why hybrid is the IT reality today and for the foreseeable future. The webcast also featured a high-level overview of the SnapLogic Integration Cloud and an interactive Q&A on a wide variety of topics. The Q&A got into why the legacy enterprise service bus (ESB) is not well suited for cloud integration scenarios, the benefits of multi-tenancy and the challenges legacy integration technologies face when dealing with modern integration requirements.

Before the discussion, Stefan summarized SnapLogic’s position in the market by saying:

“As you have seen, SnapLogic can really be both integration in the cloud and integration with the cloud. If you want the Snaplex on-premise you have on-premise data governance and you can point to the SaaS applications you want to connect to. But if you have the majority of apps in the cloud, you want the Snaplex in the cloud. Such architecture is very flexible to keep both options open.”

Here is the transcript of the Q&A:

Question: You mentioned ESBs typically aren’t multi-tenant and aren’t well suited for the cloud. Are there specific reasons why an ESB is not the right choice for cloud integration?

Answer: A couple of reasons:

  1. The scalability of an ESB. ESBs can become very big, as you might have seen in your own infrastructure if you run a large ESB, but they struggle to become very small. It sounds funny, but it’s not. For example, take using your cloud-based integration scenario to do B2B integration: you can easily integrate not only your Salesforce.com system with your ERP system, but also all the channel partners of your Salesforce.com tenant into your ERP system. You can build a very elegant channel integration like this. But if you spun up an ESB for each of those partners, and it was used for only a couple of messages per day, you’d waste a lot of infrastructure (and licenses, by the way). These are examples that hit the limits of traditional ESB architectures.
  2. The paradigm of developing complex integration scenarios. These tools are helpful if you have very complex requirements, but if you just want to synchronize your NetSuite customer data with your SAP customer data, or whatever ERP you have on-prem, this is tool overkill. I’ve seen customers that try to use a traditional ESB for those cloud scenarios end up with much higher costs, and with skills they need to buy in externally; they would really benefit from cloud-based integration as an alternative.

Question: Why can’t I simply do this with my existing middleware, and what’s your recommendation for using my existing middleware with some of these new scenarios?

Answer: First of all, I’d like to motivate and encourage everybody to try out the new solutions. Not only because this is a conversation sponsored by SnapLogic, but really because both traditional application integration and traditional data integration are simply too heavyweight in many cases for the requirements of synchronizing data with the cloud. That means I’d like to encourage you to try it out, to find out which use cases in your enterprise fit, and then weigh extending your old middleware to those use cases against learning something new and licensing something new. Many mid-size and large enterprises end up using both. So cloud-based integration is not a replacement; it complements your existing middleware. It covers those cases where the traditional ESB or traditional data integration tools are simply overkill.

Question: Can you expand on your concept of metadata collaboration and why cloud integration platforms are better suited for it than on-prem platforms?

Answer: If you run the metadata in the cloud, obviously you can simply use cloud collaboration. For example, you can put up a marketplace, or you can put up crowdsourcing scenarios where you simply share the metadata, whether it’s metadata that a professional integrator or vendor created or metadata that one of your peers created, maybe somebody in a totally different company with the same type of configuration or the same two endpoints. That means you can imagine dealing with metadata more like dealing with a Google spreadsheet, for example: you can easily share it with other people in real time. The old way of dealing with metadata in traditional middleware was more like traditional Microsoft Office, where you send around an Excel spreadsheet. When it arrives, it’s outdated already.

That’s the difference in the cloud. It’s more collaborative. It’s more shared. You can imagine a marketplace and crowdsourcing, the sharing of metadata in a totally different way. And that ultimately cuts down implementation costs and makes skills much cheaper because people enjoy the simplicity and simply take the examples from other people instead of reading large manuals or buying a consultant.

Question: Does multitenancy still matter? What are the reasons it’s important in a cloud integration platform?

Answer: Multitenancy definitely brings a significant cost cut. Technically you can achieve many of the things we’ve discussed today by spinning up a dedicated virtual machine on Amazon or elsewhere. But if you’ve spun up a virtual machine in your own environment, you cannot do the things we just discussed on metadata sharing. If you have one shared environment, the environment can define that we share metadata or that we plug metadata into an app store – we collaborate on metadata, but we keep the data itself secret and private. That is the model Forrester has started to call "Collaborative Tenancy." That means being private on those parts that need to be private, like your data flows, but being very collaborative on those parts that can be collaborative, like the metadata. These concepts are significantly different from what you could achieve in a single-tenant environment.

Finally, I mentioned the scalability example already. In a multitenant environment, if a tenant is not needed it falls asleep; it doesn’t require any infrastructure anymore. That enables both infrastructure pricing and potentially also license pricing that are transaction-based. In the old world, look at your ESBs: they all have CPU-based pricing. If they’re lying around unused, they’re still expensive. This is all significantly different.

Question: We had a debate recently about the heritage of an integration vendor. How do traditional vendors change? Can they change? What’s important under the hood?

Answer: You cannot change the basic architecture of software. You cannot. If it’s based on a traditional Java environment and built with a traditional object model, you cannot turn it into a JSON representation. It’s simply not that easy. However, people try to map it. In the end you can obviously serve apps that are written in both paradigms, both ways, but you will not get the performance. That means if you have heavy traffic in the new cloud environments – for example, a customer-facing application that deals with your customers, is also connected to Facebook and has unpredictable traffic volumes – I think the new middleware architectures like the one from SnapLogic definitely have some advantages. In the end, it comes down to cost. It comes down to the volume of physical infrastructure that you need to serve the traffic, and with a modern architecture you’re definitely built to deal with that traffic.

Be sure to watch the entire webcast here and check our events page for upcoming web and live programs. You may also find this post interesting – why SOA was DOA thanks to the ESB.

This week we published the results of a survey we ran in March with TechValidate, which asked about the barriers to software-as-a-service (SaaS) adoption and the business and technical drivers for cloud-based integration services. We’ll be reviewing the details of the research in a webinar on April 25th, which will also provide a detailed overview of the SnapLogic Integration Cloud.

Here are some of the key findings from the research:

  • 56% of survey respondents are running four or more SaaS applications.
  • 43% prioritized application and data integration challenges as a barrier to SaaS application adoption in their companies.
  • 59% of survey respondents listed speed or time to value as the primary business driver for a cloud integration service.
  • 52% said a modern and scalable architecture was the primary technical requirement of an iPaaS.

When asked about the challenges of using legacy integration tools for cloud integration (see my colleague’s post on Why Integration Heritage Matters and his summary of Why the Enterprise Service Bus Doesn’t Fly in the Cloud), 43% took issue with the requirement for costly hardware purchases and software installation and configuration, 37% found on-premise integration tools to be too expensive due to the perpetual licensing model, and 35% noted that change management is painful because endpoint changes mean integration rework.


As I noted in the press release, the results of this TechValidate survey are in line with the conversations we’re having with our customers, partners and prospects. As SaaS application, analytics and API adoption grows in the enterprise, the ability to connect with other systems is the essential ingredient for long-term customer success. Integration should be a cloud accelerator, not a bottleneck, which is why companies of all sizes are increasingly looking for modern, elastic integration alternatives to power their cloud services initiatives.

I hope you can join us for the webinar next week. You can also download the complete survey results here.

We are pleased to announce that this week we will be rolling out the SnapLogic Integration Cloud April 2014 release.


UI Enhancements

The following enhancements have been made to improve usability.

  • The Pipeline Run Log and Run Pipeline icons in Designer were updated for clarity.
  • The Save icon in info boxes and dialogs now saves the pipeline instead of just applying changes.
  • Pipeline tabs resize to make it easier to access all open pipelines.
  • The Snaplex Health Wall in Dashboard was updated with new icons to more clearly indicate the status of your Snaplexes.

Manager

  • Pipeline Tasks now support pipeline parameters. When you create a Task for a pipeline, you can now set values for any parameters defined on that pipeline.

Features

  • OAuth2 support was added for REST.
  • Conversion between date/time and epoch is now supported with the addition of Date Getter methods to the expression language (a conceptual sketch follows this list).
  • Replicate Database Schema
    Support has been added to replicate database tables in Redshift by passing the schema in a second output view of the database Select Snap and sending it to the second input view of a Redshift Insert or Bulk Load Snap. See the Replicate a Database Schema in Redshift use case for more information.
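
As an aside, the date/epoch conversion is easy to picture. This is plain Python for illustrating the concept, not the expression-language syntax itself:

```python
from datetime import datetime, timezone

# Epoch milliseconds -> date/time (UTC)
epoch_ms = 1398384000000
dt = datetime.fromtimestamp(epoch_ms / 1000, tz=timezone.utc)
print(dt.isoformat())  # 2014-04-25T00:00:00+00:00

# Date/time -> epoch milliseconds
print(int(dt.timestamp() * 1000))  # 1398384000000
```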

Snaps

New Snaps will be available with this release to integrate with:

  • Active Directory
  • Vertica
  • LDAP
  • OpenAir

See the Release Notes for the lists of Snaps that were updated.

“Putting a JSON format on a traditional, XML-based ESB is like making a silk purse out of a sow’s ear.”

- Loraine Lawson’s analogy in reference to the article: Why Buses Don’t Fly in the Cloud: Thoughts on ESBs

I recently wrote about why the legacy enterprise service bus (ESB) won’t fly in the cloud. Loraine Lawson at IT BusinessEdge reviewed the article and asked the question: Does Integration’s Heritage Matter in the Cloud? At SnapLogic we believe so strongly that heritage matters that we rebuilt our elastic integration platform from the ground up to be fast, multi-point and modern. Here’s why we believe that the heritage of integration products matters:

  1. Because the Innovator’s Dilemma creates major hurdles for legacy integration vendors to venture completely into this new area of social, mobile, analytics, cloud and (Internet of) Things, which we’re calling SMACT.
  2. Because attempts to build on their past successes result in experiments and half-baked solutions.

You’d be surprised how many on-premise integration product managers in Silicon Valley spend more time worrying about earnings per share (EPS) threats than the future of the ESB!

So let’s look at the two reasons why integration heritage matters.

The Innovator’s Dilemma Challenge
Clayton Christensen’s words echo throughout the boardrooms of Silicon Valley today as we face so many technology innovations and disruptions of traditional markets. It is extremely difficult to give up the gravy train that is the perpetual licensing and software maintenance model. Transitioning to a subscription pricing model, let alone making the cultural changes that software as a service (SaaS) demands, is no simple option. Even if the company’s executives are willing to make this transition, it’s the shareholders who would be very unhappy going from 30-40% operating margins down to single digits. If you were a fly on the wall in the boardroom of a company with an on-premise heritage trying to enter the cloud market, “cannibalization” would be the most commonly heard word. And even if the board and executives get it, good luck telling the legacy product teams that their baby is no longer beautiful, and the sales team that you’re going to introduce a cloud service that no longer carries the on-premise boat anchor’s up-front price tag.

Half-Baked “Hybrid” Integration Solutions
The other reason why on-premise software companies struggle to escape their heritage is that most meaningful technological innovations cannot be applied as easily as the proverbial “lipstick on a pig” unless the new offering is completely redesigned and developed from scratch to the latest market requirements. Not many successful companies have the appetite for a complete rewrite, for the reasons mentioned in the “Innovator’s Dilemma” section above. To draw an analogy, it is like an internal combustion (IC) engine car manufacturer making cosmetic changes to its gas-powered car and expecting to compete with a state-of-the-art electric car like a Tesla. Nissan had to build its Leaf from scratch to cater to the electric car market, with a completely new transmission, motor and power supply.

Coming back to the specifics of the integration market and why vendor heritage matters, here are some technical reasons why:

  1. Resiliency in the context of integration is the ability of integration flows to handle changes and variations in data structures without breaking down. Most legacy integration products are strongly typed and tightly coupled; in other words, the platform needs to know the exact specification of the data it will process when executing the flows. Unfortunately, the SMACT world is not as compliant as we would like: changes in schemas and data structures are commonplace. Adding columns to database tables, or a partner accidentally including additional data fields in a document that gets sent to you, should not bring your integrations, and thereby your business, to their knees (see the first sketch after this list). Resiliency, or a weakly-typed/loosely-coupled paradigm, is not something that can be introduced into a product as an afterthought. Introducing resiliency is as involved as replacing the transmission of a car when moving from an IC engine to an electric motor. The platform has to be architected on such modern principles from the design phase. Hence, integration heritage does matter.

  2. Legacy integration products with extract, transform and load (ETL) roots were optimized for relational use cases, such as moving large volumes of data from relational data sources into relational data warehouses. These products were built to read rows and columns, operate on them, and write rows and columns. They struggle today when it comes to handling hierarchical data (see the second sketch after this list). Similarly, enterprise application integration (EAI) tools were built for message-oriented integrations that can handle hierarchical data, but they are optimized for real-time integrations that process one message at a time as efficiently as possible. Shedding this heritage to handle broader use cases is no small feat. It’s like converting your car’s engine to battery power; anyone who has had serious engine trouble knows that mechanics recommend buying a brand new car rather than replacing the engine!

  3. Lastly, integration products with an on-premise heritage are built with an on-premise mindset. Configurations and product libraries are laid out locally on every physical server, and these local assets need manual attention when it comes to product upgrades and patches. Managing these local files, especially in a highly distributed environment, turns into a nightmare very fast. This is another heritage inheritance that cannot be wished away without a complete product redesign. Think of this lifecycle management of the heritage platform as the oil changes you frequently have to do on an IC engine. Like most people, you as the owner need to take time out of your busy schedule and take the car to the shop for minor and major oil changes. Teslas need no oil changes, and all product maintenance is software defined: upgrades are downloaded automatically to the car over the mobile network and customers experience no downtime.
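
To ground points 1 and 2 above, here are two minimal Python sketches with invented field names; they illustrate the ideas, not any particular product’s API. First, loosely-coupled record handling: unknown fields pass through and missing fields get defaults, so upstream schema drift does not break the flow.

```python
def sync_contact(record: dict) -> dict:
    """Map an inbound record to a target system while tolerating schema drift."""
    out = dict(record)                              # pass through unknown fields untouched
    out["email"] = record.get("email", "").lower()  # normalize a known field
    out.setdefault("region", "unknown")             # default a missing optional field
    return out

# A partner adds an unexpected "loyalty_tier" field; the flow keeps working.
print(sync_contact({"email": "Ada@Example.com", "loyalty_tier": "gold"}))
```

Second, the hierarchical-data mismatch: a row-oriented ETL engine expects flat tuples, so a nested document has to be exploded into one row per child record before it can be processed.

```python
order = {
    "order_id": 42,
    "customer": {"name": "Acme", "country": "US"},
    "lines": [{"sku": "A-1", "qty": 2}, {"sku": "B-7", "qty": 1}],
}

# One document becomes one row per line item, duplicating the parent fields.
rows = [
    (order["order_id"], order["customer"]["name"], line["sku"], line["qty"])
    for line in order["lines"]
]
print(rows)  # [(42, 'Acme', 'A-1', 2), (42, 'Acme', 'B-7', 1)]
```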

In summary, heritage is more of a disadvantage in the rapidly shifting sands of technology innovation. Technology paradigm shifts are still large in magnitude and often demand a new approach and a redesign of products and technologies. In this article, we drew an analogy between integration platforms with heritage and IC engines, and between modern integration platforms and electric cars such as Teslas. Of course, one can rightly argue that at the end of the day, both cars will get you to your destination. But, as they say, it’s not the destination but the quality of the journey that makes it worth taking. And with a modern integration platform as a service (iPaaS), your journey is speedier, more cost-effective and has fewer forced downtimes, making it truly enjoyable.

Next Steps:

  • Read Greg Benson’s posts about the SnapLogic architecture and platform services.
  • Check out some of our resources to learn more about the SnapLogic Integration Cloud.


In my last post I reviewed the classic integration requirements and outlined four new requirements that are driving the demand for integration platform as a service (iPaaS) in the enterprise:

  1. Resiliency
  2. Fluidity in hybrid deployments
  3. Non-existent lifecycle management of the platform
  4. Future-proofing for the world of social, mobile, analytics, cloud, and internet of things (SMACT)

In this post, I’ll review requirement #3: Non-existent lifecycle management of the platform.

With increasingly hybrid deployments (as discussed in iPaaS requirements post #2), lifecycle management can very quickly become a nightmare for users of legacy ESB and ETL integration technologies. Upgrading on-premises integration software, such as the core product libraries, typically means binary updates for every installation across hybrid environments. While each vendor is different, I’m always surprised to realize how many cloud integration installations are simply hosted on-premise software and not true multitenant SaaS. This means the vendor has to upgrade each customer and maintain multiple versions. The more challenging upgrades, however, are customer-managed on-premise installations. It’s always amazing to find out how many enterprise customers are running old, unsupported versions of integration software out of fear of upgrades and the unscalable mindset of “if it ain’t broke, don’t fix it!” Cumbersome manual upgrades of on-premise integration installations are error-prone and result in significant testing cycles and downtime. The bigger the implementation, the bigger the upgrade challenge, and connector libraries can be equally painful. Lastly, local configuration changes and the need to rebuild mappings (see my point on the myth of “map once” here) also demand thorough testing cycles.

SaaS customers are accustomed to interacting with complex business processes (such as opportunity-to-order management in a CRM application) through a simple web interface. Hence, the bar for modern integration platforms is quite a bit higher: customers expect the vendor to shield them from as much complexity as possible. There is a similar expectation for managing the lifecycle of the iPaaS.

The new requirements around lifecycle management are:

  1. Customers want zero desktop installations, period. They want to move away from integrated development environments (IDEs) that are extremely developer-centric and require their own upgrades from time to time. Customers want browser-based designers for building integrations, where they can avail themselves of the latest, greatest functionality automatically.

  2. Customers expect the installation of the runtime engine to be self-upgrading as well. This is particularly important for on-premise installations, to avoid cumbersome, error-prone tasks and endless testing cycles. Today’s iPaaS should be smart enough to push binary upgrades to every runtime engine, regardless of its location, on-premise or in the cloud. This is particularly efficient with a software-defined integration architecture, because each of the runtime engines (we call our data plane the Snaplex) is a stateless container awaiting execution instructions from the control plane (see the sketch after this list).

  3. Customers expect the execution instructions to also include connector libraries and configuration information, which means that customers no longer need to worry about manual configuration changes at every installation location.
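
Here is a deliberately simplified sketch of the control-plane/data-plane split described in points 2 and 3. The endpoint and task payload shape are invented for illustration; this is not SnapLogic’s actual protocol:

```python
import json
import time
import urllib.request

# Hypothetical control-plane endpoint; the task payload shape is invented.
CONTROL_PLANE = "https://control.example.com/api/next-task"

def apply_upgrade(binary_url: str) -> None:
    print(f"downloading and swapping in a new runtime from {binary_url}")

def execute(pipeline: dict, connector_config: dict) -> None:
    print(f"running pipeline {pipeline['name']} with {len(connector_config)} connector settings")

def run_forever() -> None:
    """Stateless worker loop: every instruction, including binary upgrades
    and connector configuration, arrives from the control plane, so there
    are no local files to maintain by hand."""
    while True:
        with urllib.request.urlopen(CONTROL_PLANE) as resp:
            task = json.load(resp)
        if task.get("kind") == "upgrade":
            apply_upgrade(task["binary_url"])
        elif task.get("kind") == "pipeline":
            execute(task["pipeline"], task["connector_config"])
        time.sleep(5)  # poll interval
```

Because the workers hold no state of their own, pushing an upgrade is just another task in the queue, which is what makes the zero-touch lifecycle described above possible.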

A truly modern iPaaS solution will deliver on all of the above and provide an integration service that eliminates much of the pain of traditional lifecycle management. The cost and risk of not having self-upgrading software is an order of magnitude higher in today’s age of agile software delivery (note that SnapLogic delivers new software innovation on a monthly cadence – check out our March update here). There are great benefits to this approach. For one, customers are always on the latest platform and automatically keep up with the innovation that vendors deliver. For another, they no longer have to plan long and costly upgrade cycles that come with infrastructure downtime and hinder business continuity. But the biggest benefit is that your integration platform is built to run at cloud speed!

In my next and final post of this series, I’ll discuss the importance of choosing an iPaaS that future-proofs your integration investments to tackle the challenges posed by the new world of SMACT (social, mobile, analytics, cloud, and Internet of Things).