Integration a decade on – what has changed

I started working in the integration area in 2007 and have worked on more than a dozen integration projects. I’m now venerable enough to have seen what was once best practice become old hat, or even (the worst pejorative of any transformation project) “legacy”. So, what has and what hasn’t changed?

That was the way it was

When I started my integration career web services were in full swing and SOA was close to the peak of the hype cycle. The future was to be business logic exposed as re-usable services, using open standards such as WSDL and XML. Open standards were everything, and the standards bodies W3C and OASIS (the Organization for the Advancement of Structured Information Standards) were publishing an ever growing canon of WS-* specifications. The common ones were of course WSDL and WS-Security; but then came WS-Eventing, WS-ReliableMessaging, WS-Trust, WS-Policy, WS-Notification, WS-PolicyAssertions, WS-Discovery, WS-Addressing, WS-Topics, WS-Enumeration (no, I’m not making these up) and a load of others. WSDL was (as with many things designed by committee) hardly simple or elegant. Others on the list were (at least in my experience) never used, and many were probably never implemented by vendors anyway.

Every architecture diagram had a large, prominent Enterprise Service Bus (ESB) in the middle which everything had to flow through. This decoupled systems which provided services (“producers”) from systems which used services (“consumers”). Core IT teams would build fine-grained “technical services” on the ESB (connecting to back-end systems); and business analysts (non-IT) could wire those together into coarse-grained “business services” using yet more open standards – such as BPMN and BPEL (or WS-BPEL, to fit the pattern). The idea was that, just as with a CMS, the business could change a process themselves in just a couple of clicks (with no IT involvement), updating their logic and pushing straight to production. New mortgage applications, customer complaints or supply chain orders would simply follow the new path. What production controls might sit around the business altering their core processes and pushing straight to production was never really explained – but as I said: “peak hype cycle”.

Once all this was up and running, consumers could “discover” the service they wanted and “bind to” (use) it. This was done through UDDI (which I don’t think was ever given a WS- prefix). Developers and operators could find all the information about a service in one of several vendor-specific “Registry and Repository” products – these never got an open standard, or much use. The R&R held code, documentation and run-time metrics; it let operators track who used a service, and stop them using it (throttled or blocked outright). That relied, though, on consumers binding at run-time (which rarely happened) – and, for the throttling and metrics to work, on binding for every call (which never happened). Where UDDI was used at all, it was mostly to set up a connection, or by developers first getting to know a service.

Cynicism aside – breaking open monolithic systems (many of them COTS products) using open standards and decoupled re-usable services was a very good idea, and is still best practice (as I’ll look at in my next post on what has remained). Still, as the tone suggests, a lot has shifted – and not just the fall of WS-*. So what has changed in the past exciting decade?

1: Tools of the trade

The first and most obvious change has been in tools and technology – as happens in IT any time you blink. In 2007 web services were king. There was some talk of REST, but it played second fiddle – at least in the groups I talked to. The big industry players were fully bought in to the WS-* standards, both for their middleware products (Oracle OSB, IBM WebSphere ESB, TIBCO Integration Bus, etc.) and for exposing services from their packaged products (SAP, Siebel, PeopleSoft, etc.). Since then SOAP, and XML for that matter, have gone distinctly out of vogue.

JSON is (at the time of writing) the message format of choice, with YAML used for configuration files but not in common use for data transfer.
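
To make that concrete, here is a minimal sketch of a JSON message on the wire. The order payload is entirely invented for illustration, but the mechanics (serialise, transfer, parse) are exactly what sits under almost every modern API call:

```python
import json

# A hypothetical order message, purely for illustration.
order = {
    "orderId": "A-1001",
    "items": [{"sku": "WIDGET", "qty": 2}],
    "total": 19.98,
}

# JSON on the wire: compact, human-readable, typed (numbers stay
# numbers), and parseable in every mainstream language.
wire = json.dumps(order)
received = json.loads(wire)

assert received == order  # a lossless round trip
```

Compare that with the SOAP envelope, namespaces and schema the same payload would have needed a decade ago.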

SOAP web services have been supplanted by RESTful APIs. REST is a paradigm choice as well as a technology one (resource-centric rather than function-centric), but it also carries a large number of technical changes with it:

  • DNS and HATEOAS hyperlinks rather than UDDI for discoverability
  • API Gateway and Portal products rather than Registries and Repositories for documentation, metrics, and control of who can use a service
  • WSDL has been replaced by other open documentation standards – key amongst them Swagger (now OpenAPI) and RAML
  • As for the rest (no pun intended) of the changes REST brought – this is mostly the fall of the “additional tooling” sold by the big software vendors. Much of it can now be done with the standard web toolbox: standardised status codes; web caching (with cache controls and ETags); security (authentication/authorisation); and good old Apache web logs – all of which are understandable by people who have moved into integration from other areas of development, or who do integration alongside other development.
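
As a sketch of that “standard web toolbox” point, here is ETag-based cache revalidation – the sort of thing that once needed dedicated middleware. The function names and payload are my own invention, not from any particular framework; a real service would wire this into its HTTP handler:

```python
import hashlib

def make_etag(body: bytes) -> str:
    # A strong ETag derived from the response body.
    return '"' + hashlib.sha256(body).hexdigest()[:16] + '"'

def respond(body: bytes, if_none_match=None):
    """Return (status, body, etag) the way an HTTP handler might."""
    etag = make_etag(body)
    if if_none_match == etag:
        # The client's cached copy is still fresh: no body needed.
        return 304, b"", etag
    return 200, body, etag

status, payload, etag = respond(b'{"id": 1}')      # first fetch: 200
status_again, _, _ = respond(b'{"id": 1}', etag)   # revalidation: 304
```

Every browser, proxy and CDN already understands this protocol – no ESB required.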

Another change is the shrinking gap between theory and practice. In the days of web services, the WS-* stack (WS-Security, UDDI, the R&R) offered a lot of the same features as now exist in the API landscape, but I never saw it all used together. Perhaps I just wasn’t on the right project, or perhaps this is just another sign of SOA not living up to its promise. Key management, metrics, OAuth, API portals, caching and the like are all standard now for anyone implementing APIs – not just an idea which never got off the ground.

2: Growth of Open Source

The booming days of SOA were, coincidentally, also the booming days of the economy before the crash, and this must have been a good time to be in software sales. The big vendors all had a very complete stack, as mentioned, but Open Source was already starting to enter the integration space. JBoss had an ESB and a BPM suite, and Mule was starting to be talked about (although without their shiny drag-and-drop UI – and certainly before their pivot to an all-embracing adoption of all things API).

In the development and test arenas Open Source now dominates, and even production platforms have seen a proliferation of Open Source offerings such as Docker. It’s true that many enterprise editions sport a hefty price tag, and community editions often lack more advanced (one might even say essential) features; but in areas with a limited scope (such as queueing) Open Source has flourished. Kafka has even shown that an Open Source technology can compete with and outshine established vendor products.

3: Open frontiers of the internet

In 2007 SOA was decidedly an internal game. The talk was about reuse between departments and making the business a part of the SOA movement – not about allowing dangerous external people to use “our” services. Although that was the focus, there was some thought that service security should be universal, treating internal and external consumers as no different. A few attempts were made to run open UDDI registries on the web, but as usual the promise and the reality differed wildly. Most often the view was, as was so often the case even before SOA, that the edge of the data centre was where the barbarians were kept at bay, and inside was “safe”. No – before you ask, I never subscribed to that naïve view, and I’m well aware of the stats about how many hacks come from insider threats; but many people didn’t. Where there were externally facing links they were usually business-to-business, with trusted 3rd parties treated in some middle ground between external and internal: not allowed access to the core network to be sure, but a service account or mutual HTTPS would surely be good enough, right?

Now we’re into a world which is “API-led”, or “API enabled” – organisations invest not only in making internal APIs but also making APIs which can be exposed (appropriately secured) to the public internet. One driver for this has been the rise of mobile apps. If you’ve got a mobile app then it’s essentially one of your own systems – but beyond your own frontier where you have little or no control. The great benefit of this is that once you’ve gone to the effort to create a truly external API with real security controls, why not try to leverage it?

Now the scale of reuse is more about ambition than ability. At the lesser end an organisation may just use their APIs for apps, and perhaps get a bit of re-use as their B2B connections adopt them; whilst at the more experimental end you see mashups, hackathons, and an active developer community outside the organisation itself – all enabled by self-service sign-up. There is still probably more ambition than reality here (still a little bit of hype) – but the reality is much closer than it was in the SOA days.

4: Scale - Big integration at big scale

The first iPhone was, coincidentally, also released in 2007 – and in the last decade the rise of the smartphone, mashups, the continued growth of the desktop internet and even the Internet of Things (IoT) have seen an explosion in the amount of traffic on the web. This has been enabled and driven by cheap, elastic compute power through Cloud, and we’re now into a world of big data and ever bigger integration. This has driven a couple of big shifts, both in integration and in the wider architecture (in fact closing the gap between integration and everything else).

The increased demand challenged the scalability of big monoliths, with big disks, running on big servers, exposing big services through a big ESB. This drove a shift first to NoSQL databases, and then to microservices: splitting applications into small services, each exposed through a network interface (usually an API). This makes the applications themselves part of the integration landscape, and increases the number of “integration” connections. Now it’s not just a handful of connections between system A and system B, but hundreds or thousands of connections between what would once have been modules or classes within the same system.

This is also starting to change the way we move data around. A microservice “owns its own data” – but in many cases services need data from other services to operate. This can be done by making an API call to get the data, or by keeping copies of the data – lots of copies. Copies have the dual benefits of decreased latency and improved resilience, and storage is cheap enough to accommodate them, but they do mean synchronising a lot of data – adding load and complexity. This, along with IoT, the desire to do real-time analytics, and a general rise of mass data on the move, has spawned the concept of the Event Bus – such as Kafka: a distributed, scalable extension of the existing queuing pattern, but with the added benefit of persistence reaching back before a subscription started – with a raft of implications far too large for this (already somewhat lengthy) post.
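
A toy sketch of what that persistence buys you (invented names, not the real Kafka API): unlike a classic queue, where a message is gone once consumed, an event log lets a late subscriber replay everything from before it arrived:

```python
class EventLog:
    """Toy sketch of Kafka-style persistence: an append-only log
    that subscribers can replay from any earlier offset."""

    def __init__(self):
        self._events = []

    def publish(self, event):
        self._events.append(event)
        return len(self._events) - 1  # offset of the new event

    def read_from(self, offset):
        # Unlike a classic queue, history before the subscription
        # survives and can be re-read at will.
        return self._events[offset:]

log = EventLog()
log.publish({"type": "OrderPlaced", "id": 1})
log.publish({"type": "OrderShipped", "id": 1})

# A consumer subscribing late can still replay the full history.
replayed = log.read_from(0)
```

That replay-from-history property is what makes the event bus useful for synchronising data copies and feeding late-arriving analytics consumers.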

5: The fall of the ESB

This rise of microservices – each with its own well-defined interface contract – has called into question the logic of sending everything through the ESB. As has the experience of the ESB being the cause of a lot of scalability and troubleshooting woes. When I was taught the bus concept I was told that an ESB did three things: “Routing, Transformation, Orchestration”. In a microservices architecture each microservice:

  • provides a well-defined contract – so no need for transformation
  • is a discoverable REST-based service – so limited need for routing
  • is responsible for its own data, or can make calls to get the data it needs from other services – so does its own Orchestration

Now on the one hand this just means we’ve broken up the bus and moved some logic into the business services – but it also means we’ve decided that connection via the ESB is no longer a MUST but a MAY. In a 100% custom-built microservices architecture there is no need for an ESB (although there will probably be a slimmed-down API gateway in the middle). In most architectures, where COTS packages exist, there will probably still be an ESB/middleware layer – but used only when it’s needed, rather than as an end in itself.
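
As a sketch of that last point – orchestration living in the service rather than in the bus – imagine a customer service composing its own summary from two other services. All the names here are hypothetical, and the two fetch functions stand in for what would really be REST calls over the network:

```python
# Hypothetical service clients; in a real system these would be HTTP
# calls to the other services' REST APIs, not local functions.
def fetch_customer(customer_id):
    return {"id": customer_id, "name": "Ada"}

def fetch_orders(customer_id):
    return [{"orderId": "A-1001", "customerId": customer_id}]

def customer_summary(customer_id):
    """The customer service orchestrates its own data gathering –
    no central bus routes or transforms anything on its behalf."""
    customer = fetch_customer(customer_id)
    orders = fetch_orders(customer_id)
    return {**customer, "orderCount": len(orders)}
```

The transformation and routing haven’t disappeared – they’ve just moved inside the service, behind its contract.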

6: Agile, developer-led culture

The final change is again part of a wider shift in the industry. Agile, smaller (often onshore) teams have gained traction, challenging if not replacing offshore waterfall deliveries. This developer culture has – along with a host of technical limitations – pushed aside the dream of Business Analysts directly changing production processes. Rather than teaching Business Analysts IT, the reverse is happening: developers talk directly to the business (through product owners), with much the same intended result – a shorter path to production.

Conclusion

Whilst writing this I happened upon a 2007 Gartner hype cycle, which said SOA was entering the Slope of Enlightenment. As I said in the introduction, I think it was still high on the Peak of Inflated Expectations; subsequently it has crashed into the Trough of Disillusionment and emerged on the other side, albeit with a new name: “API-led enterprise” or “API-enabled architecture”. Along the way the standards and technology have changed, and it has lost a lot of the less workable bells and whistles. What has remained is a lot of the core ideas of what SOA was meant to be. The shift to cloud and the rise of mobile apps have opened services up to a wider audience than ever before. Much has indeed changed, but a lot of the good has remained. But what remained is a story for my next post.

Disclaimer

My postings reflect my own views and do not necessarily represent the views of my employer.