API Performance News

These are the news items I've curated in my monitoring of the API space that have some relevance to the API performance conversation, and that I wanted to include in my research. I'm using all of these links to better understand how the space is talking about and addressing performance concerns around APIs.

Helping The Federal Government Get In Tune With Their API Uptime And Availability

Nobody likes to be told that their APIs are unreliable, or unavailable on a regular basis. However, it is one of those pills that ALL APIs have to swallow, and EVERY API provider should be paying for an external monitoring service to tell us when our APIs are up or down. Having a monitoring service to tell us when our APIs are having problems, complete with a status dashboard and a history of our API's availability, is an essential building block for any API provider. If you expect consumers to use your API, and bake it into their systems and applications, you should be committed to a certain level of availability, and offer a service level agreement if possible.

My friends over at APImetrics monitor APIs across multiple industries, but we've been partnering to keep an eye on federal government APIs, in support of my work in DC. They've recently shared an informative dashboard tracking the performance of federal government APIs, providing an interesting view of the government API landscape, and the overall reliability of the APIs they provide.

They continue by breaking down the performance of federal government APIs, including how the APIs perform from multiple North American regions, across four of the leading cloud providers:

Helping us visualize the availability of federal government APIs for the last seven days, by applying their APImetrics CASC score:

I know it sucks being labeled as one of the worst performing APIs, but you also have the opportunity to be named one of the best performing APIs. ;-) This is a subject that many private sector companies struggle with, and the federal government has an extremely poor track record for monitoring their APIs, let alone sharing the information publicly. Facing up to this stuff sucks, and you are forced to answer some difficult questions about your operations, but it is also something that can't be ignored away when you have a public API.

You can view the US Government API Performance Dashboard for July 2018 over at APImetrics. If you work for any of these agencies and would like to have a conversation about your API monitoring, testing, and performance strategy, I am happy to talk. I know the APImetrics team is happy to help too, so don't stay in denial about your API performance and availability. Don't be embarrassed. Tackle the problem head on, improve your overall quality of service, and then having an API monitoring and performance dashboard publicly available like this won't hurt nearly as much–it will just be a normal part of operating an API that anyone can depend on.


OpenAPI Is The Contract For Your Microservice

I've talked about how generating an OpenAPI (fka Swagger) definition from code is still the dominant way that microservice owners are producing this artifact. This is a by-product of developers seeing it as just another JSON artifact in the pipeline, and of it being primarily used to create API documentation, oftentimes using Swagger UI–which is also why it is still called Swagger, and not OpenAPI. I'm continuing my campaign to help the projects I'm consulting on be more successful with their overall microservices strategy by helping them better understand how they can work in concert by focusing on OpenAPI, and realizing that it is the central contract for their service.

Each Service Begins With An OpenAPI Contract There is no reason that microservices should start with writing code. It is expensive, rigid, and time consuming. The contract that a service provides to clients can be hammered out using OpenAPI, and made available to consumers as a machine readable artifact (JSON or YAML), as well as visualized using documentation tools like Swagger UI, Redoc, and other open source tooling. This means that teams need to put down their IDEs, and begin either handwriting their OpenAPI definitions, or begin using an open source editor like Swagger Editor, Apicurio, API GUI, or even the Postman development environment. The entire surface area of a service can be defined using OpenAPI, and then provided as a mocked version of the service, with documentation for usage by UI and other application developers. All before any code has to be written, making microservices development much more agile, flexible, iterative, and cost effective. A minimal sketch of what that hand-authored contract can look like follows below.
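
To make this concrete, here is a minimal sketch of hand-authoring an OpenAPI contract before any code exists, using only the standard library. The hypothetical orders service, its path, and its schema are placeholders for whatever your service actually needs to define.

```python
# A minimal sketch of hand-authoring an OpenAPI contract before any code exists.
# The service name, path, and schema below are hypothetical placeholders.
import json

contract = {
    "openapi": "3.0.3",
    "info": {"title": "Example Orders Service", "version": "0.1.0"},
    "paths": {
        "/orders": {
            "get": {
                "summary": "List orders",
                "responses": {
                    "200": {
                        "description": "A list of orders",
                        "content": {
                            "application/json": {
                                "schema": {
                                    "type": "array",
                                    "items": {"$ref": "#/components/schemas/Order"},
                                }
                            }
                        },
                    }
                },
            }
        }
    },
    "components": {
        "schemas": {
            "Order": {
                "type": "object",
                "properties": {
                    "id": {"type": "string"},
                    "total": {"type": "number"},
                },
                "required": ["id"],
            }
        }
    },
}

# Write the machine readable artifact that documentation, mocking, and testing tooling consume.
with open("openapi.json", "w") as f:
    json.dump(contract, f, indent=2)
```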

Mocking Of Each Microservice To Hammer Out The Contract Each OpenAPI can be used to generate a mock representation of the service using Postman, Stoplight.io, or another OpenAPI-driven mocking solution. There are a number of services and tooling available that take an OpenAPI and generate a mock API, as well as the resulting data. Each service should have the ability to be deployed locally as a mock service by any stakeholder, published and shared with other team members as a mock service, and shared as a demonstration of what the service does, or will do. Mock representations of services will minimize builds, the writing of code, and refactoring to accommodate rapid changes during the API development process. Code shouldn't be generated or crafted until the surface area of an API has been worked out, and reflects the contract that each service will provide.
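
Here is a rough sketch of standing up a local mock from a contract like the one above, returning canned example payloads for each documented path. Flask and the example data are assumptions; hosted tools like Postman or Stoplight do this far more completely by deriving responses from the schemas themselves.

```python
# A rough sketch of a local mock driven by the OpenAPI contract written above.
# Flask and the canned example data are assumptions for illustration only.
import json
from flask import Flask, jsonify

app = Flask(__name__)

with open("openapi.json") as f:
    contract = json.load(f)

# Hypothetical canned examples keyed by path; a real mock would derive these from the schemas.
EXAMPLES = {"/orders": [{"id": "abc-123", "total": 42.50}]}

def register_mock(path, payload):
    # Each Flask route needs a unique endpoint name, so reuse the path itself.
    app.add_url_rule(path, endpoint=path, view_func=lambda p=payload: jsonify(p))

for path in contract.get("paths", {}):
    register_mock(path, EXAMPLES.get(path, {}))

if __name__ == "__main__":
    app.run(port=8010)  # any stakeholder can run the mock locally
```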

OpenAPI Documentation Always Available In The Repository Each microservice should be self-contained, and always documented. Swagger UI, Redoc, and other API documentation generated from OpenAPI have changed how we deliver API documentation. OpenAPI-generated documentation should be available by default within each service's repository, linked from the README, and ready to run using static website solutions like GitHub Pages, or locally on localhost. API documentation isn't just for the microservice owner / steward to use, it is meant for other stakeholders, and potential consumers. API documentation for a service should be always on, always available, and not something that needs to be generated, built, or deployed. API documentation is a default tool that should be present for EVERY microservice, and treated as a first class citizen as part of its evolution.
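
As a rough sketch of what always-on documentation in the repository can look like, the snippet below writes a static HTML shell that renders the checked-in openapi.json with Redoc, which can then be served from GitHub Pages or locally with python -m http.server. The docs folder layout and the CDN URL are assumptions, so confirm the current bundle location in the Redoc documentation.

```python
# Write a static docs page into the repository so documentation is always available.
# The Redoc CDN URL is an assumption -- check the Redoc docs for the current bundle location.
import os

REDOC_PAGE = """<!DOCTYPE html>
<html>
  <head><title>Service API Docs</title></head>
  <body>
    <redoc spec-url="openapi.json"></redoc>
    <script src="https://cdn.redoc.ly/redoc/latest/bundles/redoc.standalone.js"></script>
  </body>
</html>
"""

os.makedirs("docs", exist_ok=True)
with open("docs/index.html", "w") as f:
    f.write(REDOC_PAGE)
```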

Bringing An API To Life Using Its OpenAPI Contract Once an OpenAPI contract has been defined, designed, and iterated upon by the service owner / steward, as well as a handful of potential consumers and clients, it is ready for development. A finished (enough) OpenAPI can be used to generate server side code using a popular language framework, build out part of an API gateway solution, or configure common proxy services and tooling. In some cases the resulting build will be a finished API ready for use, but most of the time it will take some further connecting, refinement, and polishing before it is a production ready API. Regardless, there is no reason for an API to be developed, generated, or built until the OpenAPI contract is ready, providing the required business value each microservice is being designed to deliver. Writing code while an API is still changing is an inefficient use of time in a virtualized API design lifecycle.

OpenAPI-Driven Monitoring, Testing, and Performance A ready-to-go OpenAPI contract can be used to generate API tests, monitors, and performance tests to ensure that services are meeting their business service level agreements. The details of the OpenAPI contract become the assertions of each test, which can be executed against an API on a regular basis to measure not just the overall availability of an API, but whether or not it is actually meeting the specific, granular business use cases articulated within the OpenAPI contract. Every detail of the OpenAPI becomes the contract for ensuring each microservice is doing what has been promised, something that can be articulated and shared with humans via documentation, as well as programmatically by the other systems, services, and tooling employed to monitor and test according to a wider strategy.
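
A simple sketch of what contract-driven assertions can look like: call every documented GET path and check that the status code is one the contract promises. The base URL is a placeholder, and real monitors would also assert on schemas, headers, and latency.

```python
# Turn the OpenAPI contract into basic monitoring assertions: every documented GET path
# should answer with a status code the contract declares. Requires the requests package.
import json
import requests

BASE_URL = "http://localhost:8010"  # hypothetical deployment of the service

with open("openapi.json") as f:
    contract = json.load(f)

for path, operations in contract.get("paths", {}).items():
    get_op = operations.get("get")
    if not get_op:
        continue
    # Collect the numeric status codes promised by the contract (ignore "default").
    expected = {int(code) for code in get_op.get("responses", {}) if code.isdigit()}
    response = requests.get(BASE_URL + path, timeout=10)
    status = "PASS" if response.status_code in expected else "FAIL"
    print(f"{status} GET {path} -> {response.status_code} (contract allows {sorted(expected)})")
```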

Empowering Security To Be Directed By The OpenAPI Contract An OpenAPI provides the entire details of the surface area of an API. In addition to being used to generate tests, monitors, and performance checks, it can be used to inform security scanning, fuzzing, and other vital security practices. There is a growing number of services and tooling emerging that allow for building models, policies, and executing security audits based upon OpenAPI contracts. The paths, parameters, definitions, security, and authentication details become actionable for ensuring security across not just an individual service, but potentially hundreds or thousands of services being developed across many different teams. OpenAPI is quickly becoming not just the technical and business contract, but also the political contract for how you do business on the web in a secure way.
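
As a hedged example of letting the contract direct a basic security check, the sketch below sends unauthenticated requests to every operation that declares a security requirement and flags anything that does not answer with a 401 or 403. Real scanners, fuzzers, and policy engines go much further than this.

```python
# A basic contract-directed security check: operations that declare security requirements
# should reject unauthenticated requests. The base URL is a placeholder.
import json
import requests

BASE_URL = "http://localhost:8010"

with open("openapi.json") as f:
    contract = json.load(f)

global_security = contract.get("security", [])

for path, operations in contract.get("paths", {}).items():
    for method, op in operations.items():
        if method not in ("get", "post", "put", "patch", "delete"):
            continue
        # Operation-level security overrides the global requirement when present.
        requires_auth = bool(op.get("security", global_security))
        if not requires_auth:
            continue
        resp = requests.request(method.upper(), BASE_URL + path, timeout=10)
        if resp.status_code not in (401, 403):
            print(f"WARN {method.upper()} {path} answered {resp.status_code} without credentials")
```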

OpenAPI Provides API Discovery By Default An OpenAPI describes the entire surface area of the request and response of each API, providing 100% coverage for all the interfaces a service will possess. While this OpenAPI definition will be used to generate mocks, code, documentation, testing, monitoring, security, and serve other stops along the lifecycle, it also provides much needed discovery across groups, and for consumers. Anytime a new application is being developed, teams can search across the team's GitHub, GitLab, Bitbucket, or Team Foundation Server (TFS), and see what services already exist before they begin planning any new services. Service catalogs, directories, search engines, and other discovery mechanisms can use the OpenAPIs across services to index them, and make them available to other systems, applications, and most importantly to other humans who are looking for services that will help them solve problems.
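
Here is a sketch of that discovery step, searching an organization's GitHub repositories for checked-in OpenAPI files before planning a new service. The GitHub code search endpoint requires an authenticated token, and the organization name and filename are placeholders.

```python
# Search an organization's repositories for checked-in OpenAPI contracts.
# The token and organization are placeholders; GitHub code search requires authentication.
import requests

TOKEN = "YOUR_GITHUB_TOKEN"
query = "filename:openapi.json org:example-org"

resp = requests.get(
    "https://api.github.com/search/code",
    headers={"Authorization": f"token {TOKEN}", "Accept": "application/vnd.github+json"},
    params={"q": query},
    timeout=10,
)
for item in resp.json().get("items", []):
    print(item["repository"]["full_name"], item["path"])
```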

OpenAPI Delivers The Integration Contract For Clients OpenAPI definitions can be imported into Postman, Stoplight, and other API design, development, and client tooling, allowing for quick setup of environments, and collaboration on integration across teams. OpenAPIs are also used to generate SDKs, and deploy them using existing continuous integration (CI) pipelines, by companies like APIMATIC. OpenAPIs deliver the client contract we need to just learn about an API, get to work developing a new web or mobile application, or manage updates and version changes as part of our existing CI pipelines. OpenAPIs deliver the integration contract needed for all levels of clients, helping teams go from discovery to integration with as little friction as possible. Without this contract in place, on-boarding with one service is time consuming, and doing it across tens, or hundreds, of services becomes impossible.
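
A small sketch of the client side of the contract: read the server URL and paths from the OpenAPI definition to bootstrap an integration environment, the way Postman or a generated SDK would. The names here are placeholders for whatever the contract actually defines.

```python
# Bootstrap a client environment from the contract: base URL from "servers",
# plus a first smoke-test call against a documented GET operation.
import json
import requests

with open("openapi.json") as f:
    contract = json.load(f)

# OpenAPI 3 lists candidate base URLs under "servers"; fall back to a local mock.
servers = contract.get("servers", [{"url": "http://localhost:8010"}])
base_url = servers[0]["url"]

session = requests.Session()
session.headers.update({"Accept": "application/json"})

# Call the first documented GET operation as an on-boarding smoke test.
for path, operations in contract.get("paths", {}).items():
    if "get" in operations:
        print(path, session.get(base_url + path, timeout=10).status_code)
        break
```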

OpenAPI Delivers Governance At Scale Across Teams Delivering consistent APIs within a single team takes discipline. Delivering consistent APIs across many teams takes governance. OpenAPI provides the building blocks to ensure APIs are defined, designed, mocked, deployed, documented, tested, monitored, performant, secured, discovered, and integrated with consistently. The OpenAPI contract is an artifact that governs every stop along the lifecycle, and at scale it becomes the measure for how well each service is delivering, across not just tens, but hundreds or thousands of services, spread across many groups. Without the OpenAPI contract, API governance is non-existent, or at best extremely cumbersome. The OpenAPI contract is not just top down governance telling teams what they should be doing, it is also the bottom up contract through which the service owners / stewards who are delivering quality services on the ground inform governance, and lead efforts across many teams.
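
To show what lightweight, OpenAPI-driven governance can look like, here is a sketch that applies a handful of consistency rules to a service's contract. The rules themselves are examples of the kind of checks a design guide might mandate, not any standard.

```python
# Example governance lint run against a service's OpenAPI contract.
# The rules are illustrative; real design guides define their own.
import json

with open("openapi.json") as f:
    contract = json.load(f)

problems = []

if "description" not in contract.get("info", {}):
    problems.append("info.description is missing")

for path, operations in contract.get("paths", {}).items():
    if path != path.lower():
        problems.append(f"{path}: paths should be lowercase")
    for method, op in operations.items():
        if method in ("get", "post", "put", "patch", "delete"):
            if not op.get("summary"):
                problems.append(f"{method.upper()} {path}: every operation needs a summary")
            if not op.get("operationId"):
                problems.append(f"{method.upper()} {path}: every operation needs an operationId")

print("\n".join(problems) if problems else "contract passes governance checks")
```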

I can't stress the importance of the OpenAPI contract enough, both to each microservice, and to the overall organizational and project microservice strategy. I know that many folks will dismiss the role that OpenAPI plays, but look at the list of members who govern the specification. Consider that Amazon, Google, and Azure ALL have baked OpenAPI into their microservice delivery services and tooling. OpenAPI isn't a WSDL. An OpenAPI contract is how you will articulate what your microservice will do from inception to deprecation. Make it a priority, and don't treat it as just an output from your legacy way of producing code. Roll up your sleeves, spend time editing it by hand, and load it into 3rd party services to see the contract for your microservice in different ways, through different lenses. Eventually you will begin to see it is much more than just another JSON artifact laying around in your repository.


Delivering Large API Responses As Efficiently As Possible



The Impact Of Availability Zones, Regions, And API Deployment Around The Globe

Werner Vogels shared a great story looking back at 10 years of compartmentalization at AWS, where he talks about the impact Amazon has made on the landscape by allowing for the deployment of resources into different cloud regions, zones, and jurisdictions. I agree with him regarding the significant impact this has had on how we deliver infrastructure, and honestly it isn't something that gets as much recognition and discussion as it should. I think this is partly due to the fact that many companies, organizations, institutions, and governments are still making their way to the cloud, and aren't far enough along in their journeys to be able to sufficiently take advantage of the different availability zones.

In Werner's piece he focuses on the availability, scalability, and redundancy benefits of operating in different zones, which I think gets at the technical upside of operating infrastructure in the cloud, but there are also significant business, and even political, considerations at play here. As the web matures, the business and political implications of being able to operate precisely within a specific region, zone, and jurisdiction are becoming increasingly important. Sure, you want your API infrastructure to be reliable, redundant, and able to failover when there has been an outage in a specific region, but increasingly clients are asking for APIs to be delivered close to where business occurs, and regulatory bodies are beginning to mandate that digital business gets done within specific borders as well.

Regions have become a top level priority for Amazon, Azure, and Google. Clearly, they are also becoming a top level priority for their customers who operate within their clouds. It is one of those things I notice evolving across the technology landscape and have felt the need to pay attention to more as I see more activity and chatter. I’ve begun documenting which regions each of the cloud providers are operating in, and have been increasing the number of stories I’m writing about the potential for API providers, as well as API service providers. So it was good to see Werner reflecting on the significant role regions have played in the evolution of the cloud, and backing up what I’m already feeling and seeing across the sector.

While Werner focused on the technical benefits, I think the political, legal, and regulatory benefits will soon dwarf the technical ones. While the web has enjoyed a borderless existence for the last 25 years, I think we are going to start seeing things change in the next decade, making cloud regions more about maintaining control over your country's digital assets, how you generate tax revenue, and how you defend the critical digital infrastructure of your nation from your enemies. The cloud providers who are empowering companies, organizations, institutions, and government agencies to securely, but flexibly, operate in multiple regions are going to be in a good position. Similarly, the API providers and service providers who behave in a similar way, delivering API resources in a multi-cloud way, are going to emerge as the strongest players in the API economy.


Riot Games Regional API Endpoints

I'm slowly categorizing all the APIs I find that offer some sort of regional availability as part of their operations. With the ease of deployment using leading cloud services, it is something I am beginning to see more frequently. However, there is still a wide variety of reasons why an API provider will invest in this aspect of their operations, and I'm looking to understand more about what these motivations are. Sometimes it is because they are serving a global audience, and latency kills the experience, but other times I'm seeing it is more about the maturity of the API provider, and that they have such a large user base that they are getting more requests to deliver resources closer to home.

The most recent API provider I have come across offering regional API endpoints is Riot Games, the makers of League of Legends, who offer twelve separate regions for you to choose from, broken down using a variety of regional subdomains. The Riot Games API provides a wealth of meta data around their games, and while they don't state their reasons for providing regional APIs, I'm guessing it is to make sure the meta data is localized to whichever country their customers are playing in, reducing latency across networks, and making the overall gaming and supporting application experience as smooth and seamless as possible. Pretty standard reasons for doing regional APIs, and a simple example of how you do this at the DNS level.
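
For a sense of how clients typically consume this kind of DNS-level regional breakdown, here is a small sketch that picks a regional base URL before making any calls. The region codes and host pattern are illustrative only; the Riot Games developer portal documents the actual list.

```python
# Illustrative regional endpoint selection at the DNS level.
# The region codes and hostnames below are examples, not an authoritative list.
REGION_HOSTS = {
    "na": "https://na1.api.riotgames.com",
    "euw": "https://euw1.api.riotgames.com",
    "kr": "https://kr.api.riotgames.com",
}

def base_url_for(region: str) -> str:
    """Return the regional base URL, defaulting to North America."""
    return REGION_HOSTS.get(region, REGION_HOSTS["na"])

print(base_url_for("euw"))
```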

Riot Games also provides a regional breakdown of the availability of their regional endpoints on their API status page, adding another dimension to the regional API delivery conversation. If you are providing regional APIs, you should be monitoring them, and communicating this to your consumers. This is all pretty standard stuff, but I'm working to document every example of regional APIs I come across as part of my research. I'm considering adding a separate research area to track the different approaches so I can publish a guide, and supporting white papers, when I have enough information organized. All part of my work to understand how the API business operates and is expanding, showcasing how the leaders are delivering resources via APIs in a scalable way.


Reducing Polling Of Your Existing API Using Streamdata.io

I've partnered with Streamdata.io, which has resulted in me getting more acquainted with their API solutions, and telling the story of that process here on API Evangelist. I figured I would dive right in and start with the basics of what Streamdata.io does–turning your existing web API into a real-time stream. Streamdata.io acts as a reverse proxy that translates REST API polling into a stream of data. Instead of constantly polling your API for changes, your API clients will poll Streamdata.io and get a JSON Patch update if anything has changed, reducing the impact of the requests your clients make on your API.
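
To give a feel for the client side of this pattern, here is a rough sketch that opens a Server-Sent Events stream from a proxy, treats the first event as the full JSON snapshot, and applies JSON Patch documents as they arrive. The proxy URL and token parameter are placeholders, so check the Streamdata.io documentation for the exact request format; it assumes the requests and jsonpatch packages are installed.

```python
# A rough sketch of consuming a proxied SSE stream: first event is a full snapshot,
# later events are JSON Patch diffs. The URL and token below are placeholders.
import json
import requests
import jsonpatch

STREAM_URL = "https://proxy.example.com/https://api.example.com/resources?token=YOUR_TOKEN"

snapshot = None
with requests.get(STREAM_URL, stream=True, headers={"Accept": "text/event-stream"}) as resp:
    for raw in resp.iter_lines(decode_unicode=True):
        if not raw or not raw.startswith("data:"):
            continue  # skip event names, comments, and keep-alives in this simple sketch
        payload = json.loads(raw[len("data:"):].strip())
        if snapshot is None:
            snapshot = payload                                   # first event: full document
        else:
            snapshot = jsonpatch.apply_patch(snapshot, payload)  # later events: JSON Patch diffs
        print("current document:", snapshot)
```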

When thinking about what Streamdata.io does it is easy to get caught up in the real time and streaming nature of what they do, but the most immediate value they bring to the table is about making your relationship with your API clients more efficient. Streamdata.io reduces the costs associated with operating your API, stepping in between you and your demanding clients, and acting as a buffer that will reduce the load on your servers. Eliminating one of the biggest headaches for API providers, and reining in the behavior of our most active and demanding clients.

I'm always surprised by the answers I get from API providers when I ask them why they rate limit their APIs. I'd say that 80% of the time it is about reducing the overhead and impact on backend systems, and dealing with the bad behavior of API consumers. Streamdata.io provides a pretty compelling solution to help alleviate this reality of operating APIs for most API providers. It isn't just about making things real-time, it is more about cost savings, and minimizing the impact of API consumption on our back-end solutions. Making rate limiting irrelevant, unless you have some other specific business need behind your decision.

There are numerous other benefits Streamdata.io brings to the table, but reducing the load on your APIs is probably the most relevant to ALL of my readers who operate APIs. We can always do better when it comes to making our APIs more efficient, and Streamdata.io is a way we can do this with minimal costs, in minutes, not days, weeks, or months. Which is one of the primary reasons I am partnering with Streamdata.io. It is a service I find easy to push as part of my API storytelling here on the blog, and I am happy to have become part of the team.

Disclosure: Streamdata.io is the primary partner for the API Evangelist website.


Connecting Service Level Agreement To API Monitoring

Monitoring your API availability should be standard practice for internal and external APIs. If you have the resources to custom build API monitoring, testing, and performance infrastructure, I am guessing you already have some pretty cool stuff in place. If you don't, then you should not be reinventing the wheel, and you should be leveraging one of the existing API monitoring services on the market. When you are getting started with monitoring your APIs, I recommend you begin with uptime and downtime, and once you deliver successfully on that front, I recommend you work on API performance, and the responsiveness of your APIs.

You should begin by making sure you are delivering on the service level agreement you have in place with your API consumers. What, you don't have a service level agreement? No better time to start than now. If you don't already have an explicitly stated SLA in place, I recommend creating one internally, and seeing what you can do to live up to your API SLA, then once you ensure things are operating at acceptable levels, share it with your API consumers. I am guessing they will be pretty pleased to hear that you are taking the initiative to offer an SLA, and are committed enough to your API to work towards such a high bar for API operations.

To help you manage defining, and then ultimately monitoring and living up to your API SLA, I recommend taking a look at APIMetrics, who are obsessively focused on API quality, performance, and reliability. They spend a lot of time monitoring public APIs, and have developed a pretty sophisticated approach to ranking and scoring your API to ensure you meet your SLA. As you can see in the picture for this story, the APIMetrics administrative dashboard provides a pretty robust way for you to measure any API you want, and establish metrics and triggers that let you know if you've met or failed to meet your SLA requirements. As I said, you could start out by monitoring internally if you are nervous about the results, but once you are ready to go prime time you have the tools to help you regularly report internally, as well as externally to your API consumers.
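
To show how simple the connection between monitoring output and an SLA can be, here is a bare-bones sketch that takes a list of check results from whatever monitoring service you use and compares measured availability against the number in your agreement. The 99.9% target and the check data are made up for illustration.

```python
# Compare measured availability against the SLA target written into your agreement.
SLA_TARGET = 99.9  # percent availability promised to consumers (example value)

# Hypothetical results pulled from a monitoring provider's API: True = check passed.
checks = [True] * 9985 + [False] * 15

availability = 100.0 * sum(checks) / len(checks)
print(f"availability: {availability:.2f}% (target {SLA_TARGET}%)")
if availability < SLA_TARGET:
    print("SLA breached for this reporting period -- time to notify consumers")
```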

I wish that every stop along the life cycle had a common definition for describing a specific aspect of service level agreements, and was something that multiple API providers could measure and report upon, similar to what APIMetrics does for monitoring and performance. I'd like to see API design begin to have a baseline definition that is verifiable through a common set of machine readable API assertions. I'd love for API plans, pricing, and even terms of service to be measurable and reportable in a similar way. These are all things that should be observable through existing outputs, and reflected as part of service level agreements. I'd love to see the concept of the SLA evolve to cover all aspects of the quality of service beyond just availability. APIMetrics provides a good look at how the services we use to manage our APIs can be used to define the level of service we provide, something that we could be emulating more across our API operations.


Caching For Your API Is Easier Than You Think And Something You Should Invest In

I'm encountering more API providers who have performance and scalability concerns with their APIs, and who are making technical procurement decisions (gateways, proxies, etc.) based upon these challenges, but have not invested any time or energy into planning and optimizing caching on the existing web servers that are delivering their APIs. Caching is another aspect of HTTP that I keep finding folks have little or no awareness of, and they do not consider investing more in it to help alleviate their scalability and performance concerns.

There was a meeting I attended a couple weeks back where an API implementation was concerned about a new project for bulk loading and syncing of data between multiple external systems and their own, because of the strain it put on their database. Citing that they received millions of website and API calls daily, they said they could not take the added load on their already strained systems during the day, limiting this type of activity to a narrow window at night. I began inquiring about the caching practices in place for web and API traffic, and they acknowledged that they knew of no such activity or practices in place. This isn't uncommon in my experience, and I regularly encounter IT groups who just don't have the time and HTTP awareness to implement any coherent strategy–this particular one just happened to admit it.

My friends over at the API Academy have a great post on caching for RESTful and hypermedia APIs, so I won't be addressing the details of HTTP, and how you can optimize your APIs in this way. API caching isn't an unproven technology, and it is a well known aspect of operating on the web, but it does take some investment and awareness. Like API design in general, you have to get to know the resources you are serving up, understand how your consumers are putting these resources to work, and adjust, dial in, and tweak your caching strategy. It is something that gets incrementally harder the more time zones you operate in, but with some investment you can significantly increase the scalability of your APIs and the performance of properly cached paths, and do more with fewer resources. Scaling the size of your server isn't always the first sensible thing you should be doing; a coherent caching strategy will be a much wiser and more cost-effective approach in the long run.
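
To show how approachable this is, here is a small sketch of the kind of HTTP caching that is often missing: a read-heavy endpoint that sets Cache-Control and an ETag, and answers conditional requests with a 304 so repeat consumers never touch the backend. Flask and the endpoint are assumptions for illustration.

```python
# A read-heavy endpoint with basic HTTP caching: Cache-Control for shared caches,
# plus ETag / If-None-Match handling so unchanged responses cost almost nothing.
import hashlib
import json
from flask import Flask, request, Response

app = Flask(__name__)

def load_products():
    # Stand-in for an expensive database query.
    return [{"id": 1, "name": "widget"}, {"id": 2, "name": "gadget"}]

@app.route("/products")
def products():
    body = json.dumps(load_products())
    etag = hashlib.md5(body.encode()).hexdigest()
    if request.headers.get("If-None-Match") == etag:
        return Response(status=304)  # client copy is still fresh
    resp = Response(body, mimetype="application/json")
    resp.headers["ETag"] = etag
    resp.headers["Cache-Control"] = "public, max-age=300"  # let proxies cache for 5 minutes
    return resp

if __name__ == "__main__":
    app.run(port=8020)
```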

A lack of API caching strategy amongst my clients and readers has a damaging effect on API operations. However, I'd say the most damage done isn't by the lack of a strategy, it is by the reverberating decisions made around the inability to properly scale, and deliver the performance API clients are needing. I see many technology procurement decisions being made where scalability and performance are a major part of the conversation and decision making process, yet where conversations around API caching have never occurred. This is just lazy. This is just ignoring one of the key tenets of what makes the web work. This is just investing in technical debt, over making sensible architectural decisions, and spending the time to get to know the resources you are serving up, and how your customers are using them. Learning about HTTP and caching does take some investment and planning, but it is nowhere near the investment and planning that will be required to unwind the technical debt you've acquired from the other bad technology purchasing decisions you've made along the way.


Understanding Global API Performance At The Multi-Cloud Level

APIMetrics has a pretty addictive map showing the performance of API calls between multiple cloud providers, spanning many global regions. The cloud location latency map “shows relative performance of a standard, reference GET request made to servers running on all the Google locations and via the Google global load balancer. Calls are made from AWS, Azure, IBM and Google clouds and data is stored for all steps of the API call process and the key percentiles under consideration.”

It is interesting to play with the destination of the API calls, changing the region, and visualizing how API calls begin to degrade to different regions. It really sets the stage for how we should start thinking about the deployment, monitoring, and testing of our APIs. Region by region, getting to know where our consumers are, and making sure APIs are deployed within the cloud infrastructure that delivers the best possible performance. It's not just about testing your APIs from many locations, it is also about rethinking where your APIs are deployed, leveraging a multi-cloud reality using all the top cloud providers, while also making API deployment by region a priority.

I'm a big fan of what APIMetrics is doing with the API performance visualizations and mapping. Beyond that, I think their approach to using HTTPbin is a significant part of this approach to monitoring and visualizing API performance at the multi-cloud level, while also making much of the process and data behind it all public. I want to put some more thought into how they are using HTTPbin behind this approach to multi-cloud API performance monitoring. I feel like there is potential here for applying this beyond just API performance, and thinking about other testing, security, and critical aspects of reliability and doing business online with APIs today.
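
For a sense of what is happening under the hood, here is a bare-bones version of this kind of measurement: timing a reference GET request against an httpbin-style endpoint and reporting percentiles. APImetrics runs this from agents in every cloud region; this sketch only measures from wherever you run it.

```python
# Time a reference GET request and report rough latency percentiles.
# The target is a public httpbin endpoint; swap in your own reference deployment.
import statistics
import time
import requests

TARGET = "https://httpbin.org/get"
samples = []

for _ in range(20):
    start = time.perf_counter()
    requests.get(TARGET, timeout=10)
    samples.append((time.perf_counter() - start) * 1000)  # milliseconds

samples.sort()
p95 = samples[int(len(samples) * 0.95) - 1]  # rough 95th percentile from the sorted samples
print(f"median {statistics.median(samples):.0f} ms, p95 {p95:.0f} ms")
```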

After thinking about where else this HTTPbin approach to data gathering could be applied, I want to think more about how the data behind APIMetrics' cloud location latency map can be injected into other conversations, when it comes to where we are deploying APIs, and running our API tests. Eventually I would like to see this type of multi-cloud API performance data alongside security and privacy compliance data, and even the regulations of each country as they apply to specific industries. Think about a time when we can deploy our APIs exactly where we want them based upon performance, privacy, security, regulations, and other critical aspects of doing business in the Internet age.


Internet Connectivity As A Poster Child For How Markets Work Things Out

I have a number of friends who worship markets, and love to tell me that we should be allowing them to just work things out. They truly believe in the magical powers of markets, that they are great equalizers, and work out all the world's problems each day. ALL the folks who tell me this are dudes, with 90% being white dudes. From their privileged vantage point, markets are what brings balance and truth to everything–may the best man win. Survival of the fittest. May the best product win, and all of that delusion.

From my vantage point markets work things out for business leaders. Markets do not work things out for people. Markets don't care about people with disabilities. Markets don't see education and healthcare any differently than they see financial products and commodities–they just work to find the most profit they possibly can. Markets work so diligently and blindly towards this goal, they will even do this to their own detriment, while believers think this is just how things should be–the markets decided.

I see Internet connectivity as a great example of markets working things out. We've seen consolidation of network connections into the hands of a few cable and telco giants. These market forces are looking to work things out and squeeze every bit of profit out of their networks that they can, completely ignoring the opportunities that are available when networks operate at scale, and operate freely to everyone's benefit. Instead of paying attention to the bigger picture, these Internet gatekeepers are all about squeezing every nickel they can for every bit of bandwidth that is currently being transmitted over the network.

The markets that are working the Internet out do not care if the bits on the network are from a school, a hospital, or you playing an online game and watching videos–they just want to meter and throttle them. They may care just enough to understand where they can possibly charge more because it is a matter of life or death, or it is your child's education, so you are willing to pay more, but as far as actually equipping our world with quality Internet–they couldn't care less. Cable providers and telco operators are in the profit making business, using the network that drives the Internet, even at the cost of the future–this is how short sighted markets are.

AT&T, Verizon, and Comcast do not care about the United States remaining competitive in a global environment. They care about profits. AT&T, Verizon, and Comcast do not care about folks in rural areas possessing quality broadband to remain competitive with metropolitan areas. They care about profits. In these games, markets may work things out between big companies, deciding who wins and loses, but markets do not work things out for people who live in rural areas, or who depend on the Internet for education and healthcare. Markets do not work things out for people, they work things out for businesses, and the handful of people who operate these businesses.

So, when you tell me that I should trust that markets will work things out, you are showing me that you do not care about people, except for that handful of business owners whose club you are hoping to someday join. Markets rarely ever work things out for average people, let alone people of color, people with disabilities, and beyond. When you tell me about the magic of markets, you are demonstrating to me that you don't see these layers of society. Which demonstrates your privilege, and your lack of empathy for the humans around you, while also demonstrating how truly sad your life must be, because it is lacking in meaningful interactions with a diverse slice of the life we are living on this amazing planet.


The Growing Importance of Geographic Regions In API Operations

I have been revisiting my earlier work on an API rating system. One area that keeps coming up as I’m working is around the availability of APIs in a variety of regions, and the cloud platforms that are driving them. I have talked about regional availability of APIs for some time now, keeping an eye on how API providers are supporting multiple regions, as well as the expanding world of cloud computing that is powering these regional examples of providing and consuming APIs.

I have been watching Amazon rapidly expand their available regions, as well as Google and Microsoft racing to catch up. But I am starting to see API providers like Digital Ocean providing APIs for getting at geographic region information, and Amazon provides API methods for getting the available regions for Amazon EC2 compute–I will have to check if this is standard across all services. Twilio has regions for their API client, and Runscope has a region API for managing how you run API tests from a variety of regions. The role of geographic regions when it comes to providing APIs, as well as consuming APIs is increasingly part of the conversation when you visit the most mature API platforms, and something that keeps coming up on my radar.
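
As a quick example of the kind of region metadata APIs I am talking about, here is a sketch using the EC2 DescribeRegions call via boto3, which assumes AWS credentials are already configured in the environment; other providers have their own equivalents.

```python
# List the regions available for EC2 compute using boto3's DescribeRegions call.
# Assumes AWS credentials are already configured in the environment.
import boto3

ec2 = boto3.client("ec2", region_name="us-east-1")
for region in ec2.describe_regions()["Regions"]:
    print(region["RegionName"], region["Endpoint"])
```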

We are still far from the average company being able to easily deploy, deprecate, and migrate APIs seamlessly across cloud providers and geographic regions, but as APIs become smaller and more modular, and cloud providers add more regions, and APIs to support automation around those regions, we will begin to see more decisions being made at deploy and run time regarding where you want to deploy or consume your API resources. To be able to do this we are going to need a lot more data and common schema regarding what geographic regions are available for deployment, what services operate in which regions, and other key considerations about exactly where our resources should operate. This is why I'm revisiting this work, to see what I can do to get API service providers to share more data from either the API provider or consumer side of the equation.

I am considering adding an area of my research dedicated to API regions, aggregating examples of how geographic regions are playing a role in API operations. I'm thinking region availability will be playing just as significant a role as performance, plans, security, reliability, and other areas of the API lifecycle when it comes to deciding where you deploy or consume your APIs. It feels like another one of the aspects of API operations that will overlap with many stops along the API lifecycle–not just deployment. One of the areas of the API lifecycle I'm increasingly thinking about that will affect geographic API decisions is regulations, and how governments are dictating what is acceptable when it comes to the storage, transmission, and access of digital resources. It feels like early notions of what the World Wide Web has been for the last 25 years are about to be blown out of the water, with the influences of digital nationalism, regulation, or even the Internet moving off planet, increasingly driven by satellite infrastructure.


APIs For Monitoring The Performance Of Your APIs

I am a big fan of API providers who also have APIs. It may sound silly to say, but you would be surprised how many companies are selling services to API providers and do not actually have an API themselves. So, anytime I find a good example of API service providers launching new APIs that help API providers be more successful, I’m all over it with a story.

Today’s example is from my friends over at Runscope with their API Metrics API that lets you “retrieve your API tests performance metrics for each individual test, keep a pulse on your API’s performance over time, and create custom internal or external dashboards with it”. You can filter the request by using 3 different parameters:

  • region - The service region you’re using to run your tests (e.g. us1, us2, eu1, etc.)
  • timeframe - Hour, day, week, or month. Depending on the timeframe you use, the interval between the response times will be different.
  • environment_uuid - Filter by a specific environment, such as test, production, etc.
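
Here is a hedged sketch of what a call with those three filters might look like. The exact path and parameter handling should be confirmed against the Runscope API documentation; the token, bucket, and test identifiers are placeholders.

```python
# Pull test performance metrics with the three filters described above.
# The metrics path shown here is an assumption -- confirm it in the Runscope API docs.
import requests

TOKEN = "YOUR_RUNSCOPE_TOKEN"
URL = "https://api.runscope.com/buckets/BUCKET_KEY/tests/TEST_ID/metrics"  # assumed path

resp = requests.get(
    URL,
    headers={"Authorization": f"Bearer {TOKEN}"},
    params={"region": "us1", "timeframe": "day", "environment_uuid": "ENVIRONMENT_UUID"},
    timeout=10,
)
print(resp.status_code, resp.json())
```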

That is a pretty healthy example of everything that is API for me–an API that helps you make sure your APIs are performing as expected. You can not just understand how well your API responds, but also dial that in by region, and paint a clear picture of how well you are doing over time. I like that you can create internal dashboards for communicating this with your organization, but I also like their approach to providing external API performance dashboards so much that I am going to add it to my list of building blocks I track as part of my API performance research.

Aight. That concludes today's showcase of an API service provider making sure they are practicing what they preach and providing APIs for their valuable services. Honestly, I find this to be a fascinating layer of the API sector–the API layer that can orchestrate APIs. I enjoy thinking about what is possible when your APIs have APIs–it makes something like API performance much more obtainable, scalable, and, as Runscope does it, something you can easily communicate with your internal stakeholders and your API community.


To Incentivize API Performance, Load, And Security Testing, Providers Should Reduce The Associated Bandwidth And Compute Costs

I love that AWS is baking monitoring and testing in by default with the new Amazon API Gateway. I am also seeing new services from AWS and Google providing security and testing services for your APIs, and other infrastructure. It just makes sense for cloud platforms to incentivize the security of their platforms, but also to ensure wider success through the performance and load testing of APIs as well.

As I'm reading through recent releases and posts, I'm thinking about the growth in monitoring, testing, and performance services targeting APIs, and the convergence with a growth in the number of approaches to API virtualization, and what containers are doing to the API space. I feel like Amazon is baking monitoring and testing into API deployment and management because it is in their best interest, but it is also an area where I think providers could go even further when it comes to investment.

What if you could establish a stage of your operations, such as QA, or maybe production testing, where the compute and bandwidth costs associated with operations in that stage were significantly discounted? Kind of like the difference in storage levels between Amazon S3 and Glacier, but designed specifically to encourage monitoring, testing, and performance work on API deployments.

Maybe AWS is already doing this and I've missed it. Regardless, it seems like an interesting way that any API service provider could encourage customers to deliver better quality APIs, as well as help give a boost to the overall API testing, monitoring, and performance layer of the sector. #JustAThought


The New Mind Control APIs That Salesforce Is Testing On Conference Attendees Is Available To Premier Partners

The Dreamforce conference is happening this week in San Francisco, a flagship event for the Platform as a Service (PaaS) company. Salesforce is one of the original pioneers in API technology, allowing companies to empower their sales force using the latest in technology. In 2015, Salesforce is taking this to the next level, with a handful of attendees and partners in attendance at the conference.

Using smart pillow technology, Salesforce will be testing out a new set of subliminal mind control APIs. All attendees of the Dreamforce conference have agreed to be part of the tests, through their acceptance of the event terms of service, but only a small group of 500 individuals will actually be targeted. Exactly which attendees are selected will be a secret, even from the handful of 25 partners who will be involved in the test. 

Through carefully placed hotel pillows, targeted attendees will receive subliminal messages, transmitted via smart pillow APIs developed by Salesforce. Messages will be crafted in association with partners, testing out concepts of directing attendees in what they will eat the next day, which sessions they are attending, where they will be going in the exhibit hall, and who they will be networking with. The objective is to better understand how open the conference attendees are to suggestion, in a conference environment.

While some partners of this mind control trial are just doing random tests to see if the technology works, others are looking to implement tasks that are in sync with their sales objectives. Ernst Stavro Blofeld, CEO of Next Generation Staffing Inc, says "the Salesforce test represents the future of industry, and the workforce--this week's test is about seeing what we can accomplish at a conference, but represents what we will be able to achieve in our workforce on a daily basis."

Salesforce reminded us that this is just a simple test, but an important one that reflects the influence the company already has over its constituents. The company enjoys one of the most loyal bases of business users out of all leading software companies in the world, and this new approach to targeting a loyal base of users is just the beginning of a new generation of API engineered influence.


Introducing Runscope Metrics: API Performance and Usage Analytics


Some of your most popular feature requests have been for reporting and analytics -- specifically performance (latency) and usage (consumption) metrics. Today, we're announcing the release of Runscope Metrics for all customers. The first two Runscope Metrics reports that we're releasing are Performance and Usage.

Runscope Radar is very useful for testing if an API is operating properly. You may have noticed that while defining tests, an assertion for "Response Time (ms)" is an option. True, catching a failing test is important (i.e. not responding within the time threshold), but tracking performance data on successful tests is just as important. Gradual increases in latency (response time) can be signs of a backend service that is not scaling well. Spikes of latency can be an indication of intermittent network problems that are unrelated to the health of the backend service. Both of these cases are common and mostly go undetected. These spikes and gradual increases in latency are now easily spotted with visual performance graphs. By catching latency issues early, developers can investigate and address them before they grow into major problems.

Keeping tabs on the number of API requests an app makes is also important. Modern APIs implement rate limits and throttles — for example, restricting apps to X calls per hour, or Y calls per day. In most cases, an app that exceeds a limit is denied access, which could lead to application failure. It's unfortunate, but that's usually what it takes for developers to discover their API request capacity problem. Using Runscope Metrics, developers can stay several steps ahead by monitoring their usage. All API calls that proxy through Runscope get logged. Similar to the latency performance report, the usage report makes it easy for developers to spot both gradual trends in growth as well as spikes. This report helps developers to forecast usage trends and plan accordingly.

The default reporting view for both Performance and Usage Reports is across all hosts. Finding the exact method that is experiencing latency issues, or ramping up on API call consumption, is very easy. Developers can refine the scope of each report by simply clicking a hostname from the list. From the hostname view, drilling down to the endpoint path and method is done exactly the same way. Runscope Metrics is available to all customers. If you need help understanding Runscope Metrics reports, send us a note. Our support team is standing by, ready to help.

URL: http://blog.runscope.com/posts/introducing-runscope-metrics-api-performance-and-usage-analytics

Contributing To The Testing & Monitoring Lifecycle

When it comes to testing and monitoring an API, you begin to really see how machine readable API definitions can be the source of truth in the contract between API provider and consumer. API definitions are being used by API testing and monitoring services like SmartBear, providing a central set of rules that can ensure your APIs deliver as promised.

Just like generating up to date documentation, you can make sure all your APIs operate as expected, and ensure the entire surface area of your API is tested and operating as intended. Test driven development (TDD) is becoming common practice for API development, and API definitions will play an increasing role in this side of API operations.

An API definition provides a central truth that can be used by API providers to monitor API operations, but also gives the same set of rules to external API monitoring services, as well as individual API consumers. Monitoring, and understanding, an API's uptime from multiple external sources is becoming a part of how the API economy is stabilizing itself, and API definitions provide a portable template that can be used across all API monitoring services.

Testing and monitoring of vital resources that applications depend on is becoming the norm, with new service providers emerging to assist in this area, and large technology companies like Google making testing and monitoring a default part of all platform operations. Without a set of instructions that describe the API surface area, it will be cumbersome, and costly, to generate the automated testing and monitoring jobs necessary to produce a stable API economy.


APITools Raises The Bar With Open, On-Premise API Testing and Monitoring Tools

APITools, the cloud-based API integration service, is raising the bar for the space by introducing an open source, on-premise version of their API monitoring service. APITools only launched this year, and because of consumer demand, they moved up the timeframe for open sourcing the platform, which was already on the roadmap.

I'd say that after API design, API integration services and tooling–for testing, monitoring, and transforming API calls–is one of the fastest growing segments of the API space. We are seeing solid solutions from SmartBear, Runscope, TheRightAPI, Nomos Software, and from API pioneer John Musser with API Science, but APITools is definitely raising the stakes by open sourcing their offering.

The world of API integration services and tooling is rapidly expanding, and only time will tell whether developers prefer running in the cloud or on-premise, and what features they are looking for–something I've been documenting as I study what each of the companies offers.

I suspect that, along with other API design, deployment, and management tools, we'll need a mix of freemium, open tiers, on-premise, and enterprise API integration services and tooling to meet the demands of this fast growing segment that overlaps with both providing and consuming APIs.

Disclosure: APITools is a 3Scale service, and 3Scale is an API Evangelist partner.


Beta Testing Linkrot.js On API Evangelist

I started beta testing a new JavaScript library, combined with an API, that I'm calling linkrot.js. My goal is to address link rot across my blogs. There are two main reasons links go bad on my site: either I moved the page or resource, or a website or other resource has gone away.

To help address this problem, I wrote a simple JavaScript file that lives in the footer of my blog, and when the page loads, it spiders all the links on the page, combines them into a single list, and then makes a call to the linkrot.js API.

All new links will get a URL shortener applied, as well as a screenshot taken of the page. Every night a script will run to check the HTTP status of each link used in my site—verifying the page exists, and is a valid link.
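
As a hypothetical sketch of that nightly job, the script below pulls the stored links from the linkrot.js API (a private, made-up endpoint here) and records the HTTP status of each one. A real checker would also throttle requests and store the history for the dashboard.

```python
# Hypothetical nightly link checker: fetch stored links from the private linkrot.js API
# and record whether each one still resolves. The endpoint below is a placeholder.
import requests

LINKROT_API = "https://example.com/linkrot/links"  # placeholder for the private API

links = requests.get(LINKROT_API, timeout=10).json()

for link in links:
    try:
        status = requests.head(link["url"], allow_redirects=True, timeout=10).status_code
    except requests.RequestException:
        status = None
    print(link["url"], "OK" if status and status < 400 else f"BROKEN ({status})")
```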

Every time linkrot.js loads, it will spider the links available on the page and sync with the linkrot.js API, which returns the corresponding shortened URL. If a link shows a 404 status, it will no longer link to the page, and will instead pop up the last screenshot of the page, identifying that the page no longer exists.

Eventually I will be developing a dashboard, allowing me to manage the link rot across my websites, making suggestions on links I can fix, providing a visual screen capture of those I cannot, while also adding a new analytics layer by implementing shortened URLs.

Linkrot.js is just an internal tool I'm developing in private beta. Once I get it up and running, Audrey will beta test it, and we'll see where it goes from there. Who knows!


API Testing and Monitoring Finding A Home In Your Company's Existing QA Process

I've been doing API Evangelist for three years now, a world where selling APIs to existing companies outside of Silicon Valley, and often to venture capital firms, is a serious challenge. While APIs have been around for a while in many different forms, this newer, more open and collaborative approach to APIs seems very foreign, new, and scary to some companies and investors--resulting in them often being very resistant to it.

As part of my storytelling process, I'm always looking for ways to dovetail API tools and services into existing business needs and operations, making them much more palatable to companies across many business sectors. One part of the API space I'm just getting a handle on is the area of API integration, which includes testing, monitoring, debugging, scheduling, authentication, and other key challenges developers face when building applications that depend on APIs.

I was having a great conversation with Roger Guess of TheRightAPI the other day, which I try to do regularly. We are always brainstorming ideas on where the space is going and the best way to tell stories around API integration that will resonate with existing companies. Roger was talking about the success they are finding dovetailing their testing, monitoring, and other web API integration services with a company's existing QA process--something that I can see resonating with many companies.

Hopefully your company already has a fully developed QA cycle for your development team(s), including, but not limited to, automated, unit, and regression testing--something where API tests, monitoring, scheduling, and other emerging API integration building blocks will fit in nicely. This new breed of API integration tools doesn't have to be some entirely new approach to development. Chances are you are already using APIs in your development, and API testing and monitoring can just be added to your existing QA toolbox.
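
As a quick illustration of how little ceremony this requires, here is a minimal sketch of an API check that could sit alongside existing unit and regression tests, using only Node.js 18+ built-ins. The endpoint and expected fields are hypothetical placeholders for your own API.

```javascript
// A simple API check written with the same assertions a QA team already uses.
const assert = require('assert');

async function testAccountEndpoint() {
  const response = await fetch('https://api.example.com/v1/accounts/123');

  // Assert availability and the basic shape of the response.
  assert.strictEqual(response.status, 200, 'endpoint should be available');
  const body = await response.json();
  assert.ok(body.id, 'response should include an account id');
}

testAccountEndpoint()
  .then(() => console.log('API check passed'))
  .catch((error) => {
    console.error('API check failed:', error.message);
    process.exit(1);
  });
```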

I will spend more time looking for stories that help relate some of these new approaches to your existing QA processes, hopefully finding new ways you can put tools and services like TheRightAPI to use, helping you better manage the API integration aspect of your web and mobile application development.


Netflix API Is Much More Than A Public API

Netflix entered the final stages of shuttering its public API last week. It’s been coming for a while now, starting in June of 2012, and now it is official, with the platform no longer accepting new API registrations.

After reading about the changes to the Netflix Public API program on their blog, and hearing much of the news in response, everyone seems to file this away, along with the Twitter API--just another API platform screwing over its developers.

As I do, I wanted to take a step back, look at the bigger picture, and try to understand what happened. On October 1st, 2008, Netflix launched their public API, and they appear to have done everything right. They had a blog, solicited code samples from developers, accepted application submissions, and even showcased developers’ apps in a gallery. Netflix would even help promote your app to Netflix subscribers, and threw hackathons. The Netflix API team worked to improve API performance and communicate regularly, but really nothing that amazing happened.

There were applications like InstaWatcher and WhichFlicks (among others) developed on the API, but as Daniel Jacobson puts it, a thousand flowers didn’t bloom. In these situations it’s easy to blame the API provider, but developers didn’t really step up and build anything that innovative and cool. So is this a failure of Netflix? A failure of developers to innovate? Or could it possibly be a third option: a failure of the API vision?

I would say the demise of the Netflix public API is equal parts Netflix, the developers, and the nature of the industry it exists in. It didn’t take me long to look through the Netflix API blog, so I can tell they didn’t put a lot into evangelizing the API. But I really can’t find any innovation that occurred by developers as part of it either, so I think we devs have to share some of the responsibility as well.

Several of the blog posts covering the news last week compared this to Twitter, which, for the untrained eye of the mainstream tech blogosphere, is an easy comparison to make. But Twitter is user generated content, via one of the newest types of content platforms, and Netflix is heavily licensed and policed content from one of the oldest content platforms. I think expecting public API success from Netflix and / or developers was a lot to ask.

I love and believe in APIs, but I’m not delusional enough to think they will work magically everywhere they are applied. However, even with the closing of the public Netflix API, I consider Netflix an API success story. Look what they’ve done with their internal and partner APIs. They’ve managed to scale not just from the data center to the cloud, but globally and across 800+ devices--while also sharing this knowledge and wisdom with the public via their blog.

If that wasn't enough, they are also open sourcing much of the technology behind their approach:

  • eureka - AWS Service registry for resilient mid-tier load balancing and failover
  • RxJava - a library for composing asynchronous and event-based programs using observable sequences for the Java VM
  • Governator - A library of extensions and utilities that enhance Google Guice to provide: classpath scanning and automatic binding, lifecycle management, configuration to field mapping, field validation and parallelized object warmup
  • Priam - Co-Process for backup/recovery, Token Management, and Centralized Configuration management for Cassandra
  • edda - Service to track changes in your cloud
  • recipes-rss - RSS Reader Recipes that uses several of the Netflix OSS components
  • astyanax - Cassandra Java Client
  • karyon - The nucleus or the base container for Applications and Services built using the NetflixOSS ecosystem
  • netflix-graph - Compact in-memory representation of directed graph data
  • asgard - Web interface for application deployments and cloud management in Amazon Web Services (AWS)
  • Hystrix - Hystrix is a latency and fault tolerance library designed to isolate points of access to remote systems, services and 3rd party libraries, stop cascading failure and enable resilience in complex distributed systems where failure is inevitable
  • servo - Netflix Application Monitoring Library
  • frigga - Utilities for working with Asgard named objects

When measuring the success or failure of API initiatives, we can't use the same yardstick in all scenarios. When you look at the knowledge, wisdom, and code that has come out of Netflix, there is no way you can say their API initiative is anything but a success. I don’t see Netflix as a case study in how to stream movies over the web via public APIs, but as a deeply important experiment in how to deliver licensed content to over 800 devices via the next generation of APIs--something that probably isn't an edge case, and actually represents where we all might be headed in the near future.

Let’s not get caught up in the recent deprecation of the Netflix public API. There is so much going on! Let's get to studying some of the knowledge and technology coming out of Netflix. I know it's my motivation for writing this post, and doing this research.


From ETL to API Reciprocity, Looking at 20 Service Providers

I spent time this week looking at 20 of what I’m calling API reciprocity providers, who are providing a new generation of what is historically known as ETL in the enterprise, to connect, transfer, transform, and push data and content between the cloud services we are increasingly growing dependent on.

With more and more of our lives existing in the cloud and on mobile devices, the need to migrate data and content between services will only grow more urgent. While ETL has all the necessary tools to accomplish the job, the cloud democratized IT resources, and the same will happen to ETL, making these tools accessible to the masses.

There are quite a few ETL solutions, but I feel there are three solutions that are starting to make the migration towards an easier to understand and implement vision of ETL:

These providers are more robust, and provide much of the classic ETL tooling the enterprise is used to, but also have a new emphasis on API driven services. But there are 10 new service providers, which I’m calling reciprocity platforms, that demonstrate the potential of offering very simple tasks, triggers, and actions that can provide interaction between two or more API services:

I consider reciprocity an evolution of ETL, because of three significant approaches:

  • Simplicity - Simple, meaningful connections with transfers and transformations that are meaningful to end users, not just a wide array of ETL building blocks an IT architect has to implement
  • API - Reciprocity platforms expose the meaningful connections users have with the cloud services they depend on. While you can still migrate from databases or file locations as with classic ETL, reciprocity platforms focus on APIs, while maintaining the value for end-users as well as the originating or target platforms
  • Value - Reciprocity focuses not just on transmitting data and content, but on identifying the value of the payload itself, and the relationships and emotions in play between users and the platforms they depend on

This new generation of ETL providers began the migration online with Yahoo Pipes, which resonated with the alpha developers looking to harvest, migrate, merge, mashup, and push data from RSS, XML, JSON, and other popular API sources--except Yahoo lacked the simplicity necessary for wider audience appeal.

While I feel the 10 reciprocity providers listed above represent this new wave, there are six other incumbents trying to solve the same problem:

While studying the approach of these 20 reciprocity providers, it can be tough to identify a set of common identifiers to refer to the value created. Each provider has their own approach and their own terminology. For my own understanding, I wanted to try and establish a common way to describe how reciprocity providers are redefining ETL. While imperfect, it gives me a common language to use, while also being a constant work in progress.

For most reciprocity providers, it starts with some encompassing wrapper in the form of an assembly, which describes the overall recipe, formula, or container that holds all the moving ETL parts.

Within this assembly, you can execute workflows, usually in a single flow, but with some of the providers you can daisy chain together multiple (or endless) workflows to create a complex series of processes.

Each workflow has a defining trigger, which determines the criteria that will start the workflow, such as a new RSS post or a new tweet. With each trigger comes a resulting action, which is the target of the workflow--publishing the RSS post to a syndicated blog, adding the tweet to a Google Spreadsheet or Evernote, or any other combination of trigger and action a user desires.
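
To illustrate, here is a rough sketch of how an assembly, workflow, trigger, and action might be expressed. The shape of this object is my own shorthand, not the format of any particular reciprocity provider.

```javascript
// Illustrative only: an assembly containing one workflow, with a trigger
// (the event that starts the run) and an action (the target of the run).
const assembly = {
  name: 'Tweets to Google Spreadsheet',
  workflows: [
    {
      trigger: { service: 'twitter', event: 'new_tweet', filter: '#api' },
      action: { service: 'google-spreadsheet', operation: 'append_row' }
    }
  ]
};

// A toy runner: when a trigger event fires, hand the payload to the action.
function run(assembly, triggerEvent, payload) {
  assembly.workflows
    .filter((w) => w.trigger.event === triggerEvent)
    .forEach((w) => {
      console.log(`Running ${w.action.operation} on ${w.action.service}`, payload);
    });
}

run(assembly, 'new_tweet', { text: 'Reciprocity is the new ETL #api' });
```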

Triggers and actions represent the emotional connections that are the underpinnings of ETL’s evolution into the more meaningful reciprocation of value that is emerging in the cloud. These new providers are connecting to the classic lineup of ETL interfaces to get things done:

  • Databases
  • Files
  • Messaging
  • Web Service

They also provide the opportunity to develop open connectors to connect to any custom database, file, messaging, or web service. But these connectors are not described in boring IT terms; they are wrapped in the emotion and meaning derived from the cloud service--which could have different meanings for different users. This is where one part of the promise of reciprocity comes into play, by empowering average problem owners and everyday users to define and execute against these types of API driven agreements.

All of these actions, tasks, formulas, jobs, or other types of processes require the ability to plan, execute, and audit the work, with providers offering:

  • Scheduling
  • History / Logging
  • Monitoring

With data being the lifeblood of many of these efforts, of course we will see “big data” specific tools as well:

  • Synchronization
  • Data Quality
  • Big Data
  • Analytics

While many reciprocity providers are offering interoperability between two specific services, moving data and resources from point A to point B, others are bringing in classic ETL transformations (a quick sketch of a few of these follows the list below):

  • Reformat
  • Aggregate
  • Sort
  • Dedupe
  • Filter
  • Partition
  • Merge
  • Join
  • Split
  • Convert
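
Here is the quick sketch mentioned above--a few of these transformations (filter, dedupe, sort, reformat) applied to a batch of records in plain JavaScript. The record shape is illustrative only.

```javascript
// A batch of illustrative records, including one duplicate and one inactive.
const records = [
  { id: 2, name: 'beta', active: true },
  { id: 1, name: 'alpha', active: true },
  { id: 2, name: 'beta', active: true },
  { id: 3, name: 'gamma', active: false }
];

const transformed = records
  .filter((r) => r.active)                                           // Filter
  .filter((r, i, all) => all.findIndex((x) => x.id === r.id) === i)  // Dedupe
  .sort((a, b) => a.id - b.id)                                       // Sort
  .map((r) => ({ key: r.id, label: r.name.toUpperCase() }));         // Reformat

console.log(transformed);
// [ { key: 1, label: 'ALPHA' }, { key: 2, label: 'BETA' } ]
```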

After the trigger and before the action, there is also an opportunity for other steps to happen, with providers offering:

  • Push
  • Events

During the trigger, action, or transformation there are plenty of opportunities for custom scripting and transformations, with several approaches to custom programming:

  • Custom Scripts
  • JavaScript
  • Command Line
  • API

In some cases the reciprocity provider also provides a key value store, allowing the storage of user specified data extracted from trigger or action connections, or during the transformation process--introducing a kind of memory store during the reciprocal cycle.
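
As a toy illustration of that idea, here is a sketch where a value extracted from the trigger payload is stashed so a later step in the same run can reuse it. The in-memory Map is just a stand-in for a provider's hosted key value store.

```javascript
// Stand-in for a provider's key value store, scoped to one run of a workflow.
const runStore = new Map();

function onTrigger(payload) {
  // Remember something from the trigger payload for later steps.
  runStore.set('authorEmail', payload.author.email);
}

function onAction(payload) {
  // A later action pulls the remembered value back out.
  const email = runStore.get('authorEmail');
  console.log(`Notifying ${email} about "${payload.title}"`);
}

const post = { title: 'New API post', author: { email: 'user@example.com' } };
onTrigger(post);
onAction(post);
```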

With the migration of critical resources, many of the leading providers are offering tools for testing the process before live execution:

  • Test
  • Debugger
  • Sandbox
  • Production

With any number of tasks or jobs in motion, users will need to understand whether the whole apparatus is working, with platforms offering tools for:

  • Performance
  • Monitoring
  • Optimization

While there are a couple of providers offering completely open source solutions, there are also several providing OEM or white label solutions, which allow you to deploy a reciprocity platform for your partners, clients, or other situations that would require it to be branded in a custom way.

One area that will continue to push ETL into this new category of reciprocity providers is security. Connectors will often use OAuth, respecting a user's already established relationship with the platform on either the trigger or action side, ensuring their existing relationship is upheld. Beyond this, providers are offering SSL for secure transmissions, but in the near future we will see other layers emerge to keep agreements intact and private, and to maintain the value of not just the payload, but the relationships between platforms, users, and reciprocity providers.

Even though reciprocity providers focus on the migration of resources in this new API driven, cloud-based world, several of them still offer deployment in both environments:

  • Cloud
  • On-Premise

There is not one approach, either in the cloud or on-premise, that will work for everyone and all their needs. Some data will be perfectly fine moving around the cloud, while other data will require a more sensitive on-premise approach. It will be up to problem owners to decide.

Many of this new breed of providers are in beta, and pricing isn’t available. A handful have begun to apply cloud based pricing models, but most are still trying to understand the value of this new service and what the market will bear. So far I’m seeing pricing based upon:

  • Seat
  • Assembly
  • Tasks
  • Connections
  • Extension
  • Sync
  • Support
  • Training

Much like IaaS, PaaS, SaaS, and now BaaS, reciprocity providers will have a lot of education and communication to do with end users before they’ll fully understand what they can charge for their services--forcing them to continue to define and differentiate themselves in 2013.

One of the most important evolutionary areas, which I’m only seeing with one or two providers, is a marketplace where reciprocity platform users can browse and search for assemblies, connectors, and tasks created by 3rd party providers for specific reciprocity approaches. A marketplace will prove to be how reciprocity platforms serve the long tail and the niches that will exist within the next generation of ETL. Marketplaces will provide a way for developers to build solutions that meet specific needs, allowing them to monetize their skills and domain expertise, while also bringing in revenue to platform owners.

I understand this is a lot of information. If you are still reading this, you most likely either already understand this space, or, like me, feel it is an important area to understand and help educate people about. Just like with API service providers and BaaS, I will continue to write about my research here, while providing more refined materials as Github repos for each research area.

Let me know anything I'm missing or your opinions on the concept of API reciprocity.


75 Features From Across 31 BaaS Providers

I’m currently tracking 31 backend as a service (BaaS) providers, in an effort to better understand how this new breed of platforms is helping developers build web and mobile apps. After looking at all the BaaS providers, there are 13 clear leaders:

Then there are 18 other players, trying to play catch up in a space that is working hard to define itself in 2013:

My goal is to better understand what features are offered across these 31 BaaS providers. To accomplish this, I spent no more than an hour per provider looking through their sites and playing with their products to get at least a basic understanding of their offerings.

When looking for features I tried to standardize the best I could, but it is difficult when there are different approaches to the deployment of resources on each platform. I found about 75 distinct features being offered across the 31 BaaS providers. I’m sure there are other features, and vital details missing, but I wanted to start somewhere. Here is what I found, organized as best I could:

User Management

  • User
  • User Roles
  • LDAP

Content Management System (CMS)

Data

  • Table
  • Relational
  • Key Value
  • Browser
  • MySQL Connector
  • Postgres Connector
  • Oracle Connector
  • Caching
  • XML
  • CSV

File Management

  • Storage
  • Sync

Image & Photo Management

  • Storage
  • Gallery & Collections
  • Processing

Custom Code / Objects

Programmatic Interfaces

  • Web Service Connectors
  • REST API
  • Custom REST API
  • Query

Commerce

  • Product Catalog
  • Shopping Cart

Virtual Commerce

  • In-App Purchases
  • Custom Virtual Store 
  • Virtual Goods Management 
  • Currency Maintenance 
  • Virtual Economy Regulation

Other Monetization

  • Promotions
  • Subscriptions
  • Billing
  • Passbook

Ranking

  • Recommendations
  • Reviews
  • Ratings
  • Likes

Advertising

Communication

  • SMS
  • Email
  • Email Templates
  • Push Notification
  • Interactive Voice Response (IVR)
  • Messaging System

Calendar Events

Posts

Friends

Shared Links

Geo

  • Spatial
  • Location
  • Check-In
  • Places

Gaming

  • Players
  • Ranking
  • Scores
  • Boards
  • State

3rd Party Integration

  • Twitter
  • Facebook
  • Dropbox
  • Fitbit
  • Foursquare
  • Github
  • Instagram
  • LinkedIn
  • Meetup
  • Tumblr
  • Withings
  • Wordpress
  • Yammer
  • Twilio
  • Underscore
  • SendGrid
  • Moment
  • Mandrill
  • Mailgun
  • CrowdFlower
  • Google Places
  • Google Apps
  • Salesforce
  • SAP
  • Siebel

SSL

Availability

  • Performance
  • Scaling
  • Load Balance

Deployment

  • On-Premise
  • Virtual Private Cloud
  • Public Cloud

Environment

  • Sandbox
  • Production

Utility

  • Logging
  • Backups
  • Clients
  • Jobs

Analytics

These BaaS providers support a wide variety of mobile devices, platforms, and frameworks, in multiple languages:

Mobile Devices

  • iOS
  • Android
  • Windows
  • Blackberry

Reader Devices

  • Kindle

Mobile Platforms

  • PhoneGap
  • Trigger.io
  • Titanium

App Frameworks

  • ql.io

Automation

  • Temboo

Languages

  • JavaScript
  • Java
  • C#
  • PHP
  • Python
  • Ruby

There were many different ways the BaaS platforms provided support to their developers:

Support

  • Phone
  • Web
  • Chat
  • Dedicated Account
  • Dedicated Tech

I found 10 different ways that BaaS providers delivered pricing:

Pricing

  • API Calls
  • Push Notification
  • Bandwidth
  • Storage
  • Active Users
  • Analytics
  • Support
  • App
  • Synchronization
  • Features

Marketplace

You can view all 75 features at the BaaS Github repository I set up. Let me know any that you feel are missing, and I’ll consider adding them.

Next up, I will add the features into my BaaS tracking database and publish a breakdown of providers, along with the features they offer. This will let people search and filter, and also open it up for each BaaS provider to comment and submit additional features they offer.


If you think there is a link I should have listed here feel free to tweet it at me, or submit as a Github issue. Even though I do this full time, I'm still a one person show, and I miss quite a bit, and depend on my network to help me know what is going on.