Industry Standards – Thoughts from Management World Americas 2010

 

How TM Forum specifications can become as effective as industry-leading standards

I’ve been involved in industry forums and standards bodies for many years. I’ve had experience in creating industry standards and getting them adopted, and I’ve seen many successes and failures of these standardization efforts. It’s never easy to get agreement, and it’s even more difficult to get the right standards created and adopted.

Having just come back from Management World Americas in Orlando, the largest OSS/BSS (Operations Support Systems / Business Support Systems) event in the industry, I’ve been thinking about the standards and frameworks set by TM Forum, and how they might be improved.

For those of you who are unfamiliar, TM Forum, or TMF, is a communications industry forum with over 750 members worldwide that include most of the communications industry ecosystem: service providers, equipment providers, software vendors, and system integrators. Amdocs has been a longtime key contributor to the forum and is deeply involved in many of the TMF’s programs, with active contribution from numerous contributors. I’ve personally been involved in TM Forum for some time, having attended and presented at Management World for the past several years, and I take part in the forum’s leadership activities, including committee and industry discussions – currently I serve on the TMF Technical Committee and the TM Forum Advisory Council.

How TMF helps shape the industry

TMF has historically focused on helping service providers realize leaner operations by “nudging” the industry to normalize and optimize internal operations as much as is feasible. Adopting TMF-endorsed practices should, in theory, make service providers more effective companies. The forum has created several frameworks that form the cornerstone taxonomy of our OSS/BSS landscape – these frameworks describe what business processes are involved in service providers’ operations (eTOM), what data models they use (SID), and what sort of applications are needed to implement them (TAM). While some interfaces were created or adopted over time by the forum, such as OSS/J, MTOSI, and IPDR (which I was personally deeply involved in creating), most of the work, and most of the adoption of the forum’s work, has been focused on the frameworks. The frameworks used to be collectively called NGOSS (for New Generation OSS); now they are collectively referred to as Frameworx.

The case for interface standards

Though TMF’s framework standards have been widely adopted, its interface standards typically have not. There are many reasons for this, which I discuss often within the forum’s committees and activities. I believe that until we generate relevant interface standards, this trend won’t change. What I mean by an interface standard is one where, if two separate systems each implement their side of the interface, integration between them becomes almost “plug-and-play”. I say “almost” because OSS/BSS integration is so complex that true effortless plug-and-play is unlikely – but it can certainly be much less costly, risky, and time-consuming than is usual for our industry, where system integration is notoriously long and complex.
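
To make “specific” concrete, here is a minimal sketch, in Java, of what one side of such an interface standard might look like. The TroubleTicketService operations, fields, and states below are hypothetical illustrations of the idea, not an actual TM Forum specification.

```java
// Hypothetical illustration only -- not an actual TM Forum interface.
// A "specific" interface standard pins down the operations, the required
// fields, and a closed set of states, so that any two conforming systems
// can integrate with minimal custom work.
public interface TroubleTicketService {

    /** Creates a ticket; every parameter is mandatory per the (hypothetical) spec. */
    TicketId createTicket(String customerId, String serviceId, String description);

    /** Returns the ticket's current state, drawn from the closed, standardized set. */
    TicketState getState(TicketId id);
}

/** A closed enumeration -- no room for vendor-specific states. */
enum TicketState { OPEN, IN_PROGRESS, RESOLVED, CLOSED }

/** An opaque, standardized ticket identifier. */
record TicketId(String value) {}
```

Because every operation, field, and state is pinned down, conformance can be checked mechanically on both sides of the interface.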

That said, it’s not enough to create standard specifications for interfaces, nor is it enough even to have working code to implement or test them. The timing needs to be “just right”. As with innovation, one must “surf the wave”. These standardized interfaces need to be created just in time for when they will be needed – not years too soon nor months too late. If the wave is missed or pre-empted, the standardized interface will never get used and, possibly, years of work to create it will have been in vain. But timing is critical, not sufficient: even if interfaces are created at just the right time, there must be sufficient business reason for the key actors (vendors and buyers) to implement them – otherwise, once again, the effort to create them is wasted.

My experience has led me to three key criteria that determine whether standards get adopted:

  1. Critical enablers – the best and most widely adopted standards are those that are critical enablers of an industry, not just a “nice-to-have”. Standards are usually not in the best interest of individual industry players, as each wants to create a unique solution that is differentiated from its competitors and bestows some competitive advantage. However, in many cases, without industry agreement an industry does not develop at all. Examples are many of the standards around us, such as VHS, CD, DVD, BluRay, USB, HDMI, GSM, TCP/IP, HTTP, HTML, and WiFi. In each of these cases, proprietary systems may have existed prior to the emergence of standards, but the market really couldn’t take off without them: without a standard there is no interoperability between vendors, which splits the market and keeps it from developing.
  2. Timing – a standard developed years too early – or, these days, even months too late – may never get adopted. A good-enough alternative might be found, whether from a competing standards forum or even a proprietary solution. The absence of a standard can even hold back industry progress until a single format “wins”, as in the “format wars” of VHS vs. BetaMax and HD-DVD vs. BluRay.
  3. Market force alignment – when different parties cannot agree on standardization, they resist adopting competitors’ specifications and go with proprietary ones, or even create competing standards. For example, cloud services are currently growing rapidly, particularly Infrastructure-as-a-Service (IaaS). Amazon, Google, Microsoft, EMC, and many others are providing computing and storage resources in the cloud. While their business clients would benefit from standardization, perhaps allowing them to easily migrate IT operations between providers, it’s not necessarily in the best interest of the innovators leading this space. Each of them would like to “lock in” its clients to its services, so the leading IaaS providers are probably the least inclined to align on standards that would effectively reduce their competitive advantage. In this case, standardization might benefit the newer players by allowing them to play at all. But if the market-leading players do not open up to such standards, adoption in this market will be delayed, perhaps indefinitely.

So while standardization is never easy to achieve, it has many benefits. For buyers, standards imply reduced risk, cost, and time of implementation – this is true for consumers as well as business buyers. For system providers, they imply reduced engineering costs: rather than customizing systems to conform with the unique needs of each client, systems can be developed once and deployed many times, increasing reusability and hence profit margins. I am a big believer in standards. But without delivering the intended value, they can wind up as wasted efforts. Therefore, adoption is critical.

So when do standards work? When they’re specific.

There’s another critical distinction between standards: specificity. Standards that are not specific enough may create the opposite of the intended value, or even cause unintended damage. While high-level framework standards may make discourse and understanding between parties easier, enabling different parties to share the same terminology and taxonomy, this is where the benefits end. They do not necessarily reduce the risk, cost, or time of implementation, because framework standards leave a lot of flexibility in interpretation. Even if everybody conforms to framework standards, it doesn’t mean that two separate conforming systems will interoperate. The intended benefits of standardization – both to the buyers and to the suppliers – are simply left unrealized. Such standards can even cause harm, as they create an unmet expectation among buyers that a system conforming to the standard will bring the benefit of interoperability with other conforming systems – which is simply not the case.

Interfaces make the best types of standards because they can be very specific. If the two sides of a particular interface conform to a standard that is specific enough – ideally one testable against “pass/fail” criteria – the two systems will interface with each other at vastly reduced risk, cost, and time of implementation. Therefore, wherever possible, interface standards – typically between different vendors’ systems – are the most effective way to achieve the benefits of standardization.
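
As a rough illustration of what “pass/fail” testability could mean in practice, here is a minimal conformance check against the hypothetical TroubleTicketService sketched earlier; a real certification suite would of course cover every operation, field, and error case in the spec.

```java
// Hypothetical sketch: a pass/fail conformance check against the
// illustrative TroubleTicketService above -- not a real TMF test kit.
public final class TroubleTicketConformance {

    /** Returns true only if the implementation behaves exactly as specified. */
    public static boolean passes(TroubleTicketService impl) {
        TicketId id = impl.createTicket("cust-1", "svc-1", "no dial tone");
        if (id == null || id.value() == null) return false;   // spec: an id is always returned
        return impl.getState(id) == TicketState.OPEN;         // spec: new tickets start OPEN
    }
}
```

A buyer (or a forum certification program) could run such checks against each vendor’s system and publish an unambiguous pass/fail result.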

Planning standards for the future

In the past few years, the focus of the TM Forum’s activities has shifted from helping streamline the internal operations of service providers to enabling their growth into new businesses through new services and business models. Dealing with the existing operations of service providers is much easier than planning for their future, and planning standards for that future is going to be even more difficult. The future can take different paths, and initiatives carry higher risk – some activities may pan out while most will not. Furthermore, planning the timing and effectiveness of standards for future initiatives is especially tricky. All this makes the criteria I suggest above even more acutely necessary if TMF is to succeed as an industry forum – not only in foretelling the future, but in addressing it proactively, and even becoming a critical enabler of that future. Other forums are a great source of inspiration, including 3GPP, IETF, CEA, IEEE-SA, CableLabs, OASIS, and W3C. These forums have all been able, at times, to create the right specification at the right time for the right issues. TMF should consider adopting some of the best practices that have made these forums succeed. The TMF does not have the same kind of track record in creating interface specifications that reduce the cost, risk, and time of implementation. If it is to become a critical enabler of new business models and growth for service providers, it must learn quickly how to do so.

I’m hopeful that TMF will eventually begin creating, in earnest, interface standards and driving them to industry adoption, while adopting best practices from other successful industry forums. Specifications, and even software implementations, are meaningless unless the standards get adopted by the suppliers and buyers of the systems.

I personally have contributed, and will continue to contribute, within the forum to try to bring about these outcomes.

What are your thoughts about this subject? How can TM Forum continue to help shape the industry with standards that bring benefits to everyone in the industry?

8 thoughts on “Industry Standards – Thoughts from Management World Americas 2010”

  1. Tal:

    All of your points resonate with me as regards standards of the sort you enumerate. Also, I’d agree that many of the SDOs you name have been successful in gaining spectacular adoption of their standards.

    So, a question arises – is there a common thread? One that occurs to me is that virtually all of the standards you call out are about specific types of devices or specific services. Another is that many of these organizations openly “conspired” to drive a standard into a market. I think both of these aspects are key to understanding any successes that were had.

    That raises the question of what is the equivalent focus of the TM Forum? It certainly isn’t about devices or services per se, but rather about the enterprise as a mechanism and the management of its behavior. That places the TM Forum in a unique posture relative to the industry but at the same time erects barriers of its own making due to the subject at hand. There is a not-so-bright line between offering frameworks for operating the enterprise and prescribing how a given enterprise is organized and run. I think this is more at the heart of the soft science practiced in the TM Forum than any lack of enthusiasm for your formulation for success criteria.

    As long as the fundamental mission of the TM Forum is centered around the Lean Operator, I don’t see it as likely that the TM Forum will make a major shift in culture or direction.

    One can decide for themselves if this leaves enough value in what the TM Forum does to justify participating, but it seems to me that’s the decision to be made.

  2. Great insights, and I agree that TMF should learn from the experience of other SDOs (although not all of their standards are well adopted either).

    For many years, I’ve been involved in SDOs that focus on interfaces and APIs for CSPs (3GPP, OMA, PAM, Wireless Village).

    My conclusions from this period are:

    1. I agree that developing frameworks is not enough (as Tal said), but my contention is that developing interfaces is also only the first step in promoting their adoption. Almost all the interfaces I was involved with, like SIP, SIMPLE, OMA-Push-to-Talk, OMA-IM, and OMA-Presence (and others), required actual interoperability testing and certification for several reasons:

    a. Too many ‘optional’ parameters and procedures left in the specs
    b. Different interpretations of the specs leading to incompatible products
    c. Implementation bugs

    In order to reduce CSPs’ risk and TTM, and to actually (almost) seamlessly integrate systems from different vendors, all systems need to pass rigorous testing (the tests should also be developed and managed by the SDO) and get an official certification. This certification assures buyers (CSPs) that the products comply with the standards and have been tested for interoperability.

    2. It is much easier to align vendors’ interests when systems are forced to interconnect. For example, standards (interfaces) between mobile devices and servers will generally get consensus, as these typically come from different vendors (device vs. back-end). The same is true for server-to-server interfaces in different domains (coming from different vendors).

    3. Usually, business interests win over standards quality. The BetaMax-VHS example Tal mentioned is a good one. But this conflict also leads to loose standards, leaving many ‘optionals’ in the specs. The main reason for these ‘optionals’ is companies’ interest in leaving room for differentiating features. The problem is that once there are too many ‘optionals’, interoperability tests become useless as the testing matrix gets too complicated (a rough sizing of this explosion follows at the end of this comment).

    4. It’s important to get both the big players and the small ones on board in advance, otherwise the market will remain fragmented. It’s also important to get CSPs themselves involved and have them influence the specs (or at least the requirements) from day one, to make sure the specs tackle their real problems.
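
    To put rough numbers on the ‘optionals’ explosion in point 3: with just 10 independent optional parameters, each implementation can present any of 2^10 = 1,024 distinct profiles, so verifying interoperability between two vendors’ products means checking up to 1,024 × 1,024 ≈ one million profile pairings. The test matrix becomes intractable very quickly.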

  3. The lead time for creating standards is too long, so it’s time to draw conclusions and change the approach.

    Instead of defining the entire standard (fields, records, communication protocol, etc.), it’s simpler and easier to agree upon a standard data dictionary for existing/new fields and their associated rules, and then leave it to the vendors to build a dynamic interface generator that works with that data dictionary.
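
    A minimal sketch of what such a data-dictionary-driven approach might look like (the dictionary entries here are hypothetical illustrations, not an actual SID dictionary): only the dictionary is standardized, and each vendor’s “dynamic interface” validates records against it.

    ```java
    import java.util.List;
    import java.util.Map;

    // Hypothetical sketch: standardize only a shared data dictionary, then
    // let each vendor validate/generate its interface dynamically from it.
    public final class DataDictionaryValidator {

        /** One agreed-upon dictionary entry: field name, type, and whether it is required. */
        record FieldDef(String name, Class<?> type, boolean required) {}

        // The standardized part: a shared dictionary (illustrative entries only).
        static final List<FieldDef> CUSTOMER_RECORD = List.of(
                new FieldDef("customerId", String.class, true),
                new FieldDef("accountStatus", String.class, true),
                new FieldDef("creditLimit", Double.class, false));

        /** A vendor-side check: does an incoming record conform to the dictionary? */
        static boolean conforms(Map<String, Object> rec) {
            for (FieldDef def : CUSTOMER_RECORD) {
                Object value = rec.get(def.name());
                if (value == null) {
                    if (def.required()) return false;   // missing mandatory field
                } else if (!def.type().isInstance(value)) {
                    return false;                       // wrong type for the field
                }
            }
            return true;
        }
    }
    ```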

  4. I believe Ben makes a good point. If the ontology is defined, the underlying “meaning” of the data is shared by all. The value lies in the information represented by the data, rather than in the mechanism of exchange of the information.

    The examples given of standards which emerged and were successful (VHS, CD, DVD, etc.) are all examples of standards where a tipping-point was reached for the technology to become ubiquitous. In my view, these standards succeeded because there was a commercial (or political) imperative e.g. sale of devices/media, enabling of communication and trade channels. Standards to interface system A with system B are likely to succeed if there will be millions of such interface instances. If the number of interface instances isn’t likely to reach a tipping-point, the constraints imposed by standardisation of the interface may become a barrier. So, if we’re talking about the standards a handset device may use to connect to and use a network, then millions of devices == high success factor for the standardisation effort. If we’re talking about standards to link fraud detection system X with charging system Y, the relatively small number of instances of this interface == lower success factor for any standardisation effort.
    Moving towards a common ontology, however, can lower this barrier.

  5. Ben’s point is essentially the strategy that the Forum has been taking – define the underlying process and data models rather than the detailed interfaces. These should be the real standards – how you express this information in an interface is subject to a lot of implementation decisions (e.g. Java, web services, etc.). There is a very large number of interfaces needed inside and external to the average communications player (around 48,000 if you standardized every pair-wise interface in the TAM!). To hand-craft all of them would take far too long.

    So what we are doing is two-fold: 1) simplifying the systems architecture to reduce the number of standardization points by adopting a platform approach with the help of a number of large operators (i.e. more granular assemblies of systems), and 2) working on Eclipse-based tools that will machine-generate actual interface code, test kits, and other documentation directly from the underlying models – i.e. a model-driven approach. This has lots of advantages, not least of which are speed and productivity, but also the ability to solve an age-old problem: you need standards that can be extended to cater for operator-specific needs without destroying the core standardization. The beta work on this shows great promise. However, I understand Tal does not agree with this approach. But then, those of us who know Tal well know that he sees himself as the grain of sand in the oyster and disagrees with most things!
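
    For readers unfamiliar with the model-driven idea, here is a toy sketch of what “machine-generate actual interface code from the underlying models” could mean; the model and names are invented for illustration, and this is not the Forum’s actual Eclipse tooling.

    ```java
    import java.util.List;

    // Toy illustration of model-driven generation: emit interface source
    // code directly from a tiny model. Not the Forum's actual tooling.
    public final class InterfaceGenerator {

        record Operation(String name, String returnType, String paramType) {}
        record InterfaceModel(String name, List<Operation> operations) {}

        static String generate(InterfaceModel model) {
            StringBuilder src = new StringBuilder("public interface " + model.name() + " {\n");
            for (Operation op : model.operations()) {
                src.append("    ").append(op.returnType()).append(' ')
                   .append(op.name()).append('(').append(op.paramType()).append(" input);\n");
            }
            return src.append("}\n").toString();
        }

        public static void main(String[] args) {
            // The same model could just as well drive test kits and documentation.
            System.out.println(generate(new InterfaceModel("BillingInquiry",
                    List.of(new Operation("getBalance", "String", "String")))));
        }
    }
    ```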

  6. Again, I’d like to thank all for commenting and making this an engaging debate. I realize this post was somewhat more controversial than others, so I expected some stiff opposition to the thoughts raised in the post.
    Keep ’em coming!

    Individual responses:
    Steve, it’s true that when organizations share an interest in creating a standard, and so “conspired” to make it happen, it works well. It’s also true that the more specific an interface is – for a specific device or a specific service – the more successful it is. So the subject matter of TMF is, seemingly, more complex: many systems that interface with each other along a plethora of business processes. You’re right that it’s a sort of “soft science”. The question is, what would make TMF more effective, and as a result, its members and the industry more effective? Frameworks or interfaces? You and I worked together on an interface we felt was rather important at the time, IPDR. We made it generic enough to carry any payload and be useful for a variety of interfaces between parties, with a number of compatible formats and transports, including complete plug & play. It was easily customizable, yet completely testable and standard in the sense that I describe. Couldn’t we target a few key interfaces and do the same with them?

    Oded, I generally agree with all your points. Here are some comments to clarify a bit. I do not argue that interoperability testing and certification of interfaces cannot help in gaining adoption; in fact, they do in many cases, especially if buyers place weight on them. However, my main argument is that compatibility at the frameworks level doesn’t come close to assuring interoperability. So the focus should be on interfaces, and any way to drive adoption and interoperability is welcome. I completely agree on the factors that foster alignment. Again, I think that’s one factor that can help identify, a priori, whether a standard has a good chance of reasonable endorsement and adoption. I am quite familiar with the problem of “optional” fields, which effectively make some standards that would otherwise be interoperable not so. This is also a problem inherent in relying on frameworks and having extensibility. So I think when one focuses on interface standards, it’s also important to create built-in mechanisms for interoperability, as we did with IPDR and as some standards with good semantics have done. Regarding your last point, which is also excellent – I believe that unless strong market forces are there to CREATE the standards, they do not have good chances of success, and the means you suggest would demonstrate strong industry support.

    Ben, indeed the TMF standard SID is just such a data dictionary. I’m not arguing that SID is not valuable and useful; I am arguing that certifying against SID is not. I think SID + eTOM + TAM are great starting points from which to create interfaces. But the end goal should be the interfaces themselves – interfaces that carry SID payloads between TAM-defined applications along eTOM-defined business processes, for instance. There are still too many variants of how to do this. Standardized interfaces must be developed, and those need to be interoperable. That’s what would reduce the cost. Interface generation is, perhaps, a tool to slightly accelerate part of the specification creation, but it doesn’t substitute for all the great points that Oded mentions as key factors in creating successful standards, on top of the points I made.

    Dan, first read my response to Ben. But regarding the tipping point – I don’t quite buy it. A technology was developed, and in order for it to become a commercial success, it required “one type of something” – there was not much room for diversity. Therefore, a standard was a business imperative for most parties. Obviously, everybody wants the standard to be based on their proprietary way of doing things, but they realize that they must compromise in order for all (including themselves) to have a fighting chance of winning in the marketplace. That tension and business interest is, I think, the key driver of successful standards. The other point you are making is very interesting – and I’ll rephrase it a bit: is the sheer volume of instances of OSS/BSS interfaces sufficiently small that a very specific standard doesn’t help?
    Well, when I look at a global scale, I think we’re talking about each interface being used, possibly, a few thousand times. If I count the engineering cost of hand-crafting and customizing that interface each time, there are still HUGE costs involved. If the number were more like a handful, I would agree with you. But since we’re still talking about thousands of systems that would benefit, there’s a difference.
    By the way, there actually might be many more – especially along B2B interfaces. For instance, application developers’ programmatic interfaces with platforms such as SDPs exposed by service providers, cloud services, and many more. Suppose Google didn’t have a standard maps interface – how would that work? Granted, in that case it’s not an industry standard, which makes it necessary for anybody who wants to integrate with maps to select a proprietary solution. But the point is that the number of interfaces is much larger than we expect, and the cost implied in each is significant.

    Keith, first, thanks for commenting. As I responded above to Ben, there is certainly value in standardizing data models and frameworks; it’s the certification of these that is the issue. Also, the end goal of these data models, business processes, and application models – to which we have been, and continue to be, very active contributors – is, in my mind, the creation of interfaces. Why? Because those are what really create interoperability. All the foundations we’ve invested in for years can pay off big time only in interfaces. As we all know, two systems that conform to the various frameworks, and even pass “certification tests”, will not necessarily interoperate. Had the interface been standardized, with interoperability as a prime consideration, it would be completely different. In order to get a consistent set of multiple interfaces in a domain, all the frameworks are helpful – but not as the end goal. We cannot stop here, nor should we be emphasizing certification of them. We should run ahead to the end goal, which is true cost reduction by means of interoperable standardized interfaces, based on the frameworks.

    Now regarding the sheer number of interfaces – even if we were to create the 48,000 high-quality, consistent interfaces implied by TAM, they would not be adopted. It’s quite obvious why: behind the interfaces lies the functionality of the systems. All these systems exist, and would practically never adopt such a vast array of new interfaces – the engineering costs would be unimaginable. However, if we focus on 10 key interfaces between parties, preferably across business boundaries, then we can just make it happen, probably knocking off one time-critical domain at a time. Not necessarily too slowly – possibly even fast. This would move faster if the right business conditions exist; if they do not, the interfaces might never get adopted anyway. The great thing is that we have already invested in and have such a foundation to work from in terms of the data dictionary / semantics of the domain. But now the time has come to get down to the business of interfaces rather than emphasize conformance to frameworks.

    I’m not claiming that the tooling you are suggesting will not help – but without compelling business drivers that lead the parties implementing the various sides of an interface to comply with the standard out of their own interests, it will all be useless. The existence of a specification, however machine-readable and high quality, is not the key determinant of whether the interface gets adopted (as Oded described well in his comment).

    Regarding the ability to extend/customize the interface – I’m all for that, but there are methods to make even these extensions interoperable. Some interfaces have so many optional fields that they are no longer interoperable. So that needs to be a key design goal of interfaces: interoperability even despite extensibility. Neither tooling nor frameworks make interfaces interoperable – it’s clever foundational interface design plus actual interfaces with sufficient specificity that do.
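
    One common pattern for “interoperability despite extensibility” – sketched here purely as an illustration, not as IPDR’s actual mechanism – is a mandatory, closed core plus namespaced extensions that every conforming receiver must tolerate and may ignore:

    ```java
    import java.util.Map;

    // Illustrative sketch: extensibility that preserves interoperability.
    // The core is mandatory and closed; extensions live under vendor
    // namespaces and MUST be ignorable by any conforming receiver.
    public final class ExtensibleRecord {

        private final Map<String, Object> fields;

        public ExtensibleRecord(Map<String, Object> fields) {
            this.fields = Map.copyOf(fields);
        }

        /** Core fields are unprefixed and mandatory per the (hypothetical) spec. */
        public Object core(String name) {
            Object value = fields.get(name);
            if (value == null) throw new IllegalStateException("missing mandatory core field: " + name);
            return value;
        }

        /** Extensions are namespaced ("vendorX.foo"); receivers that don't know them never ask. */
        public Object extension(String vendor, String name) {
            return fields.get(vendor + "." + name);
        }
    }
    ```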

    Finally, indeed we know each other well, and I have the deepest respect for you and your leadership of the TM Forum. As we have discussed one-on-one, and in internal forum meetings for years, I continue to believe that interfaces are the way to go in order to truly realize the benefits intended by standards. All we have done so far makes a great foundation, but I feel we are beginning to stray from the end goal, spending time on certification / conformance testing that has little, if any, benefit while missing some of the plot.

    And I must admit – I am certainly a contrarian. However, the thoughts I express here, while certainly mine, are not mine alone. I know many who agree but have not voiced their concerns as loudly.

    I’d like to thank all for the comments and for the terrific debate.

  7. Sorry to be so late to the game on this thread … good thoughts all around … concerning the TM Forum specifically, my own views tend to support and extend the perspective raised by Steve Cotton’s input: While the TM Forum interface specs have contributed materially to industry progress over the years (and TIP will likely continue to do so), our major value as an industry organization emanates from much broader contributions. The TM Forum and its Frameworx perform a service similar in nature to a “Good Housekeeping” seal-of-approval program – i.e., service providers, equipment manufacturers, software developers, and solution integrators who adopt (and adapt) TM Forum guidance attain a level of credibility within the industry (especially when backed up with active participation in the development, demonstration, and refinement of the TM Forum outputs) … such adoption/adaptation and participation also provide an objective basis for comparability among industry alternatives … lastly (but definitely not least), TM Forum Frameworx provides a foundation for new entrants into the industry, not as a “barrier to entry” but as reasonable expectations of commercial viability … while leaving ample room for innovation, value-add, and other forms of competitive differentiation.

  8. Coincidentally, I jumped aboard this great discussion, inspired by it.

    I speak from the examples of 3GPP, OMA, and TMF, in which I have been deeply involved for many years (including many years of WG chairmanship), and I now lead Huawei Software’s TMF standards activity. (Even though I know many of you well, the reverse may not be the case.)

    I would agree with most of the arguments here, in light of some negative feeling about TMF standards adoption. Here are my two cents, from the perspective of standards-creation culture.

    In 3GPP, whose standards are widely adopted, the whole UMTS/LTE network is vertically partitioned into radio access network, system application, and core network. The essential deliverables are always architecture- and interface-definition oriented.

    OMA is a bit different: its scope is dynamic and contribution-driven – once there are four qualified supporting companies, a work item (WI) can be created. The deliverables are also architecture/interface oriented. But OMA suffers from the same problem, i.e. standards adoption, and as a consequence its participation has declined from nearly 1,000 members to below 200 today.

    TMF so far has been successful in promoting its standards (plus the keynote speaking at TMW), combined with standards creation (Frameworx plus data models).

    Conformance certification, in contrast with interoperability (IOP) testing, is apparently cost-saving, and it can also ensure the direction is a unified one. I can understand that in the BSS/OSS world this level of IOP might make sense, as it addresses less-demanded interconnection than signalling-level interfaces do. Huawei recently passed a series of Frameworx certifications, and I can feel the process itself has had a positive influence on the adjustment of our internal frameworks.

    So nothing conclusive can be drawn – there is no significant proof that either approach, architecture/interface oriented or process-plus-data-model oriented, is unsuccessful. One thought that occurs to me about TMF activity is that many topics initiated in TMF cannot always lead to a tangible deliverable: their development is too dynamic and tends to diverge, and progress cannot always actually address the original demand. SDF may be a live example. If an operation like the TMF Catalyst program gathered a visible circle of supporting companies and skeletonized the deliverables, with a formal requirements document and output definition, it would be much more helpful. I believe in the value of TMF, but it is also time to change.
