New partnerships with Oracle, Microsoft Azure and Google Cloud are highlighting Informatica’s strategy to dominate the market for data management products by offering integrations that cut down the time and complexity of data migration, management and engineering tasks. The partnerships, announced this week at the annual Informatica World conference, boost the capabilities of the company’s Intelligent Data Management Cloud (IDMC) and are in sync with the company’s recent data management product releases and previous partnerships with cloud software and service providers including AWS, Databricks and Snowflake. At the conference, Informatica launched two new industry-focused IDMCs for financial services and healthcare firms, as well as new data engineering and MLOps tools.

Informatica has been slowly executing its plan to go beyond the ETL (extract, transform and load) tools it has been known for. Crucial to this strategy are moves to deepen its partnerships with major cloud service providers so that all Informatica customers, regardless of their cloud vendor, can take advantage of the range of its offerings for migrating, planning, analyzing, engineering and governing data. The key focus seems to be to eliminate the need for multiple vendors or platforms for data management and analytics, at a time when siloed data and the complexity of managing modern applications are plaguing enterprises.

Among this week’s announcements, the company said that it was extending its relationship with Microsoft to include a private preview of a SaaS (software as a service) version of Informatica’s Master Data Management (MDM) product on Azure, which will be available via the Azure Marketplace. The SaaS service is part of Informatica’s data management cloud, and the partnership is meant to enable joint customers to rapidly combine and rationalize hundreds of data sources for all their critical business operations, resulting in a repository of trusted data that can be used to generate business insights, according to the company. In addition, the company has added new governance capabilities to IDMC on Azure, to allow users to apply management and security rules to the flow of data from disparate sources to Microsoft’s Power BI analytics software. This will enable Informatica to provide a complete view of data governance from data source to data consumption, the company said. The new capabilities follow a partnership with Microsoft for a cloud analytics program launched in November 2021 that, according to the companies, enables almost 90% automated data migration to Azure.

Extending its partnership with Google Cloud, Informatica this week said that it was launching a new free cloud service, named Informatica Data Loader for Google BigQuery, in the form of a SaaS offering. The SaaS service, which the company claims is a “zero-code, zero-devops and zero-infrastructure” offering, is designed to allow enterprise users to generate insights faster because of its ability to ingest data from multiple source connectors to the Google data warehouse. The Data Loader can be accessed directly from the Google BigQuery console, providing access to all Google Cloud customers, Informatica and Google said in a joint statement. In October 2021, Informatica partnered with Google to offer a joint data migration program.

Informatica has also added Oracle to its list of partners.
The partnership between the two companies entails support for Informatica’s IDMC platform on Oracle Cloud Infrastructure (OCI), Oracle Autonomous Database, Oracle Exadata Database Service, Oracle Exadata Cloud@Customer, and Oracle Object Storage. IDMC, which will be made available in the Oracle Cloud Marketplace, is expected to help facilitate data platform modernization by easing the move of on-premises workloads to OCI, Informatica said, adding that the partnership will enable enterprises to gain insights at scale while leveraging their existing investments.

The announcements this week follow similar moves last year. In September last year, Informatica announced an on-prem to cloud migration program with Snowflake, followed by a similar partnership with AWS in December. It also has a partnership with Databricks. In addition to the data migration programs, a key part of Informatica’s data management strategy for enterprises is a set of features that allow disparate teams and departments within companies to share data sets. Last November, it released Cloud Data Marketplace, designed to allow employees within an organization to share ready-to-use data sets for use with AI and analytics models.

This week at its conference, Informatica said that its annual recurring revenue for cloud offerings has grown 43% year-over-year as of March 31, with IDMC’s artificial intelligence engine, dubbed CLAIRE, processing over 32 trillion transactions on the cloud each month for the same period. The company, which counts Pepsi, Volvo, ADT, Telus, Freddie Mac, Blue Cross Blue Shield of Kansas City and Hershey as its customers, expects CLAIRE to double the number of transactions it processes over the next year.

CIO is proud to launch the second CIO50 Awards in the Middle East, recognising the top 50 senior technology executives driving innovation, strengthening resiliency and influencing rapid change. Reflective of IDG’s increasing commitment to the region, CIO50 is aligned to a global awards program and viewed as a mark of excellence within the enterprise. In 2022, CIO50 will be judged on four core pillars of Innovation, Diversity & Culture, Workplace and Data Intelligence, honouring transformational, inspiring and enduring CIOs at both in-country and regional levels across the Middle East.

The role of technology leaders — whether CIO, CTO, CSO or CDO — continues to rapidly evolve, driven by the emergence of pioneering technologies and business models. CIO50 will capture this change through highlighting the innovative work of individuals and organisations across the region. Whether a small project or large company-wide initiative, entrants are encouraged to document the positive impact of technology and the business benefits of disruptive thinking. The CIO50 is open to the top technology leader within an organisation who has overall responsibility and control of the IT vision and direction of the company. This C-level executive provides innovation, leadership and resiliency within their organisation, while being at the forefront of decision-making and strategic change. Specifically, the CIO50 questionnaire seeks to determine:
• the technology innovation/s that have changed the way an organisation operates.
• why the innovation/s are unique in the marketplace.
• the efforts to ensure diversity at the workplace.
• how s/he collaborates and influences the organisation and its leadership team.
• the role technology plays to help the organisation achieve its objectives.
Nominations are now open and run until June 20.
Submissions are free to enter and can be self-nominated or nominated on behalf of someone else, with all entries set for review by a select and independent CIO50 judging panel, who will rate each section of the questionnaire to determine the final list. After the top 25 CIOs and organisations are highlighted, the remaining 25 honourees are listed alphabetically by company. Submissions will also form the basis of written profiles of all 50 technology leaders. This year will also see the introduction of three individual awards, acknowledging CIO excellence across the four pillars. The most powerful nominations will be the ones that can provide real-world examples of where technology and digital chiefs successfully provide value to their organisations, drive innovation and lead their teams. The word count for responses to questions under each of the four pillars should be no more than 800 words (2,400 words in total for the four pillars). To submit your candidature, click here, or contact Andrea Benito, Editor of CIO Middle East, via andrea_benito@idg.com.

By Chet Kapoor, Chairman and CEO, DataStax

Customers demand experiences that meet them at the speed of life. Think about an app that lets you know exactly when your latte will be ready or one that offers you alternate flight options as soon as you miss your connection. Today, we have the technology to build these experiences. But companies of all sizes still face the challenges of complexity and fragmentation. There seems to be a new database for every use case, and data is scattered across many technology silos. Developers who build the real-time experiences that customers love need a tech stack that lets them access data quickly and easily.

Demand for apps isn’t going to slow down. IDC predicts that by 2023 there will be more than 500 million new cloud native digital apps and services – more than the total created over the past 40 years. So how can we simplify and accelerate the way developers build real-time apps? With an open data stack that just works. It is based on open source technologies, allows developers to use the tools of their choice, and converges “data at rest” and “data in motion.”

Greg Sly, Verizon’s SVP of Infrastructure and Platform Services, recently said that the biggest challenge for any enterprise is organizing the data that has grown across the organization over the years. “Everyone has data lakes, data ponds – whatever you want to call them. They have all grown up organically within various business units. Now it’s about bringing that together. How do you get your arms around all the data you have?” Sly said. “You’ve got a lot of complexity with a lot of people with a lot of data that needs to get categorized, inventoried, and controlled. Then how do we leverage it to create better customer experiences?”

Just as customers demand instantaneous, intuitive experiences, developers need to have all the data that matters at their fingertips. Any developer working on an app that matters – whether they’re at a startup or an enterprise – needs a technology stack that gets them across the finish line as quickly as possible. Open source projects bring together a bunch of really smart, diverse people who innovate like crazy. With contributors from different industries and geographies, the software becomes hardened and reliable across tons of use cases. And developers discover new ways of solving problems together – fast. We’ve seen this work beautifully at companies like Apple, Netflix and VerSe.
Our own research shows that a common characteristic of successful data-driven enterprises is their commitment to open source. These enterprises find a wealth of community-driven innovation that’s constantly improving open-source software (OSS) and equipping developers with best-of-breed tools to build with. Think about Apache Cassandra®. The NoSQL database was built in 2007 by Facebook engineers who needed to find a scalable way to store and access massive amounts of data. Cassandra keeps improving thanks to the brilliant contributors from many different companies that continue to add to it. For developers, it’s all about ease of building and time to market. Let’s look at an example. Ankeri provides data services to companies that manage container ship fleets. They run a small development team that cannot afford to waste time with anything other than building a better experience for their users. “We are a start-up company, and as such need to be focused on features rather than infrastructure,” said Ankeri vice president of engineering Nanna Einarsdóttir. “The path from an idea to customer feedback must be short, and we need to be agile and forward-thinking.” Data APIs go a long way in simplifying and speeding up development. They help insulate developers from distractions, like learning how data is stored or wasting computing resources by installing and running databases locally. Most importantly, APIs allow them to plug in and swap out the tech of their choice. To make it even easier for developers to build, a data gateway can serve as an abstraction layer between an application and a database. Stargate is an open-source API gateway that does this, maintaining the ease of API management and leaving the updates to be handled by project maintainers. To build apps that meet customers at the speed of life, developers need a stack that supports all real-time data: At DataStax, we partner with developers and enterprises to deliver an open data stack that serves real-time applications. Cassandra plays a starring role. DataStax Astra DB, our database-as-a-service built on Cassandra, makes “data at rest” easy to use and build on. But the world is in constant motion, and streaming “data in motion” captures changes on the fly. Best-of-breed streaming and messaging technologies like Apache Pulsar enable real-time data to be acted upon as it’s generated (like when FedEx sends a notification to a buyer that their package has been delivered). In other words, it enables actions to become visible to all of an enterprise’s applications, in real-time. This is why DataStax Astra Streaming, which is built on Pulsar, is another key component of the open stack we deliver for enterprises and developers. Here’s a quick case study of how an open real-time stack can simplify development. Siggy.ai is a real-time recommendation app that integrates with Shopify, the e-commerce platform for online stores and retail point-of-sale systems. Chang Xiao, Siggy.ai’s founder and CEO, tried building a stack on his own using a combination of AWS and open source technologies. He began searching for an always-on solution that supported specialized operations while making development easy. Chang chose DataStax Astra DB, and within weeks, he went from struggling with unreliable databases and server issues to delivering in-the-moment, AI-powered recommendations to shoppers everywhere. 
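The article does not include code, but the appeal of a data API gateway like Stargate is easiest to see with a small sketch. The following is purely illustrative, assuming a hypothetical gateway endpoint, auth header, and document-style URL layout rather than any product’s documented API: the application reads and writes JSON over HTTP and never touches a database driver.

```python
# Illustrative only: a document-style call through a hypothetical data gateway
# (in the spirit of Stargate). Host, auth header, and URL layout are assumptions.
import requests

GATEWAY = "https://data-gateway.example.com"             # placeholder endpoint
HEADERS = {"X-Auth-Token": "<app-token>",                # placeholder credential
           "Content-Type": "application/json"}

def save_order(order_id: str, order: dict) -> None:
    # Write a JSON document without knowing how or where the database stores it.
    resp = requests.put(f"{GATEWAY}/v2/namespaces/shop/collections/orders/{order_id}",
                        json=order, headers=HEADERS, timeout=5)
    resp.raise_for_status()

def load_order(order_id: str) -> dict:
    # Read it back over plain HTTP; no driver install, no local database.
    resp = requests.get(f"{GATEWAY}/v2/namespaces/shop/collections/orders/{order_id}",
                        headers=HEADERS, timeout=5)
    resp.raise_for_status()
    return resp.json()

save_order("1001", {"customer": "c-42", "total": 23.50, "status": "paid"})
print(load_order("1001"))
```

Because the gateway owns connection handling, storage layout, and scaling, a small team like Ankeri’s can plug in or swap out the backing technology later without rewriting application code.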
At DataStax, we’re obsessed with knocking down barriers between data and developers by helping them to mobilize all real-time data, build smarter applications faster, and not worry about scaling. Let’s face it. When you give developers everything they need, they can focus on innovation and building, and the outcome is better apps and experiences that customers keep coming back for. Learn more about DataStax here.

About Chet Kapoor: Chet is Chairman and CEO of DataStax. He is a proven leader and innovator in the tech industry with more than 20 years in leadership at innovative software and cloud companies, including Google, IBM, BEA Systems, WebMethods, and NeXT. As Chairman and CEO of Apigee, he led company-wide initiatives to build Apigee into a leading technology provider for digital business. Google (Apigee) is the cross-cloud API management platform that operates in a multi- and hybrid-cloud world. Chet successfully took Apigee public before the company was acquired by Google in 2016. Chet earned his B.S. in engineering from Arizona State University.

Liberty Mutual is one of the most experienced and advanced cloud adopters in the nation. And that is in no small part thanks to the vision of James McGlennon, who in his role as CIO of Liberty Mutual for the past 17 years has led the charge to the cloud, analytics, and AI with a budget north of $2 billion. Eight years ago, McGlennon hosted an off-site think tank with his staff and came up with a “technology manifesto document” that defined in those early days the importance of exploiting cloud-based services, becoming more agile, and instituting cultural changes to drive the company’s digital transformation.

Today, Liberty Mutual, which has 45,000 employees across 29 countries, has a robust hybrid cloud infrastructure built primarily on Amazon Web Services but with specific uses of Microsoft Azure and, to a lesser extent, Google Cloud Platform. Liberty Mutual’s cloud infrastructure runs an array of business applications and analytics dashboards that yield real-time insights and predictions, as well as machine learning models that streamline claims processing. As the Boston-based insurance company’s journey to the cloud has unfolded, it has also maintained a select set of datacenters from which to run legacy applications more economically than they would on the cloud, as well as software from vendors that make licensing on the cloud less attractive. And while McGlennon believes that will change over time, he is far more focused on technologies that will define the next generation of applications.

“We’re really trying to understand the metaverse and what it might mean for us,” says McGlennon, whose mild Irish brogue bares his Galway, Ireland, upbringing. “We’re focused on augmented reality and virtual reality. We’re doing a lot on AI and machine learning and robotics. We’ve already built up blockchain and we’ll continue with all those.” And that ability to push the envelope, especially around machine learning and AI, finds its foundation in Liberty Mutual’s rich cloud capabilities. Despite his laser focus on embracing emerging technologies, McGlennon remains highly enthusiastic about Liberty Mutual’s use of and expertise in the cloud. Sixty percent of the insurer’s global workloads run in the cloud, delivering significant savings in hardware and software purchasing, but the big benefit comes in the form of business insights from analytics on the cloud that are immeasurable, he says.
“The cloud has been a huge positive impact on us economically and surely you hear this story all the time, but it didn’t necessarily start out that way,” he says. “It tended to be additive to our legacy platforms when we started building out our cloud initially, but more recently, we’ve become far more mature in our use of the cloud and in our ability to optimize it to make sure that every single cycle of a CPU that we use out in the cloud is adding value.” Here, McGlennon says governing controls, instrumentation, and observability metrics are key. The CIO would not specify how much the multinational company has saved by deploying workloads to the cloud but estimated it has saved about 5% over the past two and a half years. “It’s a big number,” he says.

Implementing cloud-native architectures for autoscaling and instrumenting Liberty Mutual’s applications to control how they’re performing have been crucial to realizing those savings, McGlennon says. Like many other early cloud adopters, Liberty Mutual deploys off-the-shelf tools such as Apptio to monitor costs and automate scaling depending on workloads, he says. “We’ve worked with our cloud partners to better instrument our applications and better understand how they’re performing,” says McGlennon, who was a finalist for the MIT Sloan CIO Leadership Award for 2022. “That gives us greater insight into where we are potentially wasting resources and where we can optimize — such as moving workloads to smaller cloud platforms.” McGlennon is proud of his team’s use of Apptio, for example, to best exploit its consumption-oriented model for not just its data on the cloud but for its internal services, software, and SaaS offerings, which, when linked to Liberty Mutual’s business portfolio, essentially provides the insurer’s partners with a bill of materials for all of the resources used.

Over the past eight years, the Liberty Mutual IT team, which consists of 5,000 internal IT employees and about 5,000 outside contractors, has used a variety of development platforms and analytical tools as part of its cloud journey, spanning from IBM Rational and .NET in the early days to Java and tools such as New Relic, Datadog, and Splunk. Liberty Mutual’s data scientists employ Tableau and Python extensively to deploy models into production. To expedite this, the insurer’s technical team built an API pipeline, called Runway, that packages models and deploys them as Python, as opposed to requiring the company’s data scientists to go back and rebuild them in Java or another language, McGlennon says. “It’s really critical that we can deploy models quickly without having to rebuild them in another platform or language,” he adds. “And to be able to track the effectiveness of those machine learning models such that we can retrain them should the data sets change as they often do.” The insurer also uses Amazon SageMaker to build machine learning models, but the core models are based on Python.

Liberty Mutual’s IT team has also created a set of components called Cortex to enable its data scientists to instantiate the workstations they need to build a new model “so the data scientist doesn’t have to worry about how to build out the infrastructure to start the modeling process,” McGlennon says. With Cortex, Liberty Mutual’s data scientists can simply set their technical and data-set requirements, and a modeling workstation will be created on AWS with the right data and tools in an appropriately sized GPU environment, McGlennon explains.
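The article describes Runway only at a high level, so the following is a generic sketch of the pattern it points to: serving a Python-trained model from Python behind an API instead of rebuilding it in Java. The framework choice (FastAPI), the pickled artifact name, and the feature fields are assumptions made for illustration, not details of Liberty Mutual’s pipeline.

```python
# Generic model-serving pattern, not Liberty Mutual's Runway. A scikit-learn-style
# model is loaded from a pickled artifact and exposed over HTTP as-is.
import pickle
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI()
with open("claim_model.pkl", "rb") as f:        # hypothetical artifact name
    model = pickle.load(f)

class ClaimFeatures(BaseModel):
    vehicle_age: float
    airbag_deployed: bool
    damage_score: float                          # illustrative feature set

@app.post("/predict")
def predict(features: ClaimFeatures) -> dict:
    row = [[features.vehicle_age, int(features.airbag_deployed), features.damage_score]]
    # The model stays in its original Python form; callers only see JSON in, JSON out.
    return {"total_loss_probability": float(model.predict_proba(row)[0][1])}
```

Run it with any ASGI server (for example `uvicorn app:app`); retraining then means swapping the artifact rather than porting the model to another language.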
The insurer also deploys software bots in its claims model to enable customers to initiate a claim, e-mail a digitized photograph of their damaged vehicle, answer a few questions, and arrange a car rental quickly. On the back end, a machine learning model analyzes the photograph of the damaged vehicle to detect whether its airbag has been deployed, for instance, and to determine immediately whether a vehicle is totaled or the damage is limited to a fender bender. The insurer’s computer vision models may also tap into IoT devices and sensors deployed outside to generate more data for the claim. Liberty Mutual has come a long way from its technology manifesto to its advanced use of the cloud and AI, and embracing next-generation technologies such as augmented reality and blockchain will yield further advances, McGlennon notes. But this CIO is happy enough with the cloud and AI platform of today. “We’ve already seen significant economic payback from being able to use machine learning models to fine-tune quotes and pricing, in fraud detection, and our coding process to make it easier for customers to do business with us,” McGlennon says, pointing to advanced cloud applications’ benefits in its core business of processing claims. “We use it all over the place.” Although his is a property and casualty company, McGlennon believes CIOs must drive innovation and take risks “to create a culture where people feel there is the latitude to try something.” “Risk is our business,” McGlennon said during a panel at the MIT Sloan CIO Symposium this week, adding that CIOs need to show that when things go wrong, and sometimes they will, no one is going to be made to feel that the risk wasn’t worth it. “You have to incubate something, nurture it, give it support,” he said. Many of the leaders in the software industry come up from the ranks of working developers. They often want to expand into management as their mastery of technology gives them the confidence, but they don’t want to abandon the practice that frankly brings them fulfillment. Enter the coding leader. This new kind of leader is responsible for both strategy and being hands-on with the tech and walks in the worlds of business and technology with equal aptitude. By staying up to date with the practice of coding, these leaders maintain insight into the workings of the projects, stay on top of industry developments, and can perceive where changes can best benefit the organization. And this trend may help to address one of the most niggling problems in the software industry: the feeling among developers that they are saddled with poor managers. The coder’s daily work is often detailed, line-by-line stuff to be sure, and it can tend towards thinking about the trees more than the forest. The perennial danger for the engineer is in becoming obsessed with building things, losing sight of the business value of what they are doing. I think of this as the Bridge on the River Kwai blunder, where the character’s temporary technical task (the building of the bridge) comes to eclipse the much higher purpose (overcoming the imperial occupation). But as developers grow in their role, their vision encompasses more of the systems and processes at play, with understanding of the individual elements. As a skilled developer becomes really experienced, especially when their knowledge of the specific system under development becomes expansive, they are able to dip into high-value areas, assist with making changes, and maintain the high-level view. 
Adding to this an appreciation for the business side of things makes for a potent combination of talents. The mindset change that is required of coders here is to allow for a true balancing of priorities. While working developers may tend to see anything but actual coding as simply an interruption, successful coding leaders can hold the importance of both business and technical needs in mind—something akin to a work/life balance, where both have equal claim to attention. The coding leader knows how to keep a broad perspective that incorporates both the trees and the forest, how to shift between them, and, especially, how to allow the two to inform each other so insight flows between them. That includes, of course, the job of guiding the humans in the business. It’s such a hackneyed notion. It’s also somewhat true. Machines are logical and amenable to being coerced into doing exactly what you want by telling them in just the right way. People are not. There is something different in kind about leading people. As the programmer evolves from doing stuff, to leading other people doing stuff, to leading people leading people doing stuff, this distinction is magnified. Some folks just have a knack for people, how to elicit from them their needs, fears, and desires; how to perceive where the personality conflicts arise; how to see where they can grow; and how to effectively engage with these forces to help them and the business succeed. For the rest of us, these are learned, sometimes hard-learned, skills. Coders are no different. By acknowledging the importance of human interaction, the coding leader undertakes to gain insight and skill, just as they did when writing a for-loop or functional component was intimidating and foreign. The inner workings of the corporation are just as astounding as the internet. The beauty is that the coder has a vast advantage in leading other coders and tech personnel. Every programmer will recognize this scenario: The project manager saunters in and makes preposterous projections based on their Gantt chart. Or even more cringe-worthy, begins abusing buzzwords. To communicate the business needs to the builders in an effective way is a special art. To be an effective bridge between the two is even more precious. There’s no substitute for the actual experience of wrangling silicon into compliance. This translates not just into a deeper empathy for the technological work being done, but for the special joys bestowed upon and tolls exacted from people by the profession. There is a great deal of value to be found in keeping alive the knowing what it’s like to be in the trenches. The ability to put oneself into the shoes of the working coder is certainly a big piece of the puzzle in improving the perceived and actual performance of tech management. While researching and thinking about this issue of coding vs. managing, I happened to bring a car to the mechanic. The shop was a big operation, but I watched the owner walk out to a car and crawl under it to help diagnose a problem. There is a certain respect that comes from the engineers with a leader’s willingness and ability to jump into the thick of things. That kind of respect and fondness translates to the software world, where the leader is seen as “one of us.” In writing about his own experience as both coder and manager, Mark Porter, CTO of MongoDB, says “There are many types of CTOs. A CTO at a small company who is leading the development of the company’s first product should absolutely code. 
A CTO who is focused on outbound activities for a major firm should not.” This is a realistic acknowledgement that of course there are roles that demand the person filling it let go of hands-on coding, but there is also a place in the world for people who love coding, who want to continue being involved with it, and also grow into leadership. It’s not difficult to find even prominent leaders with deep hands-on technical knowledge these days. Werner Vogels of AWS and Brendan Eich of Brave, for example, give every indication of knowing and caring about the kinds of specifics that hands-on developers are concerned with. In the realm of technology tools this kind of expertise is even more valuable. Not only is the coding leader better able to relate with the in-house developers, but with the customers, as well. The coding leader demonstrates that a programmer is like a classical musician, rather than a football player or a fighter pilot. A classical musician may grow into a conductor who sustains their instrumental prowess to improve their work. When considering the weighty question of career paths, the notion that one must choose an either/or path forward of practicing coder or IT leader is becoming less concrete. It can perhaps be seen as a spectrum, instead of a disjunction. At one end is the pure business leader, at the other, the pure engineer. Most CIOs, CTOs, or other tech leaders will blend some of both aspects, with the coding leader falling more into the middle of the spectrum. As to the question, shall I be a manager or a coder? Maybe the answer is: both. With the Great Resignation showing no signs of letting up, recruiters are looking for all the help they can get to replenish their headcounts with qualified talent. The human resource management (HRM) market – including talent acquisition software and services – is currently valued at nearly $20 billion. It is expected to grow at a rate of over 12% annually until 2028 on the back of continued digitization and automation of recruiting and HR operations. Across the world, enterprises are putting an emphasis on creating and retaining the best, brightest, and most diverse employee pool. Expectedly, advances in artificial intelligence (AI), machine learning (ML), and predictive modeling are giving enterprises – as well as small/medium-sized businesses – a never-before opportunity to automate their recruitment even as they deal with radical changes in workplace practices involving remote and hybrid work. In fact, four out of every five recruiters surveyed in an Entelo study believe productivity would increase if they could automate candidate sourcing altogether. They were unanimously of the opinion that having more data would assist them in qualifying candidates, evaluating candidate pools, improving outreach, and perfecting hiring workflows. Despite this, 42% didn’t have the data or the time to implement or dig into analytics, let alone turn the data into insights. Enter recruiting automation solutions. Human resource or people management as a function begins with hiring. Every day an open role remains unfulfilled costs companies profit and productivity. Intelligent tools based on AI can gather relevant data on candidates, make it available to recruiters, and then process it accurately to speed up and streamline multiple sub-processes, including candidate sourcing, screening, diversity and inclusion, interviews, and applicant tracking. 
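To make “screening” less abstract, here is a deliberately toy sketch of the mechanical idea: score each candidate by how many of a role’s required skills their resume mentions. The skill list and resumes are invented, and real recruiting platforms rely on far richer signals (parsed profiles, NLP, historical outcomes) than keyword matching.

```python
# Toy screening example: rank candidates by coverage of a made-up skill profile.
# Real recruiting automation uses much richer signals than keyword matching.
REQUIRED_SKILLS = {"python", "sql", "airflow", "stakeholder management"}

def skill_coverage(resume_text: str) -> float:
    text = resume_text.lower()
    matched = {skill for skill in REQUIRED_SKILLS if skill in text}
    return len(matched) / len(REQUIRED_SKILLS)

candidates = {
    "cand_a": "Built Airflow pipelines in Python with heavy SQL reporting work.",
    "cand_b": "Led stakeholder management for a retail analytics programme.",
}
for name in sorted(candidates, key=lambda c: skill_coverage(candidates[c]), reverse=True):
    print(name, round(skill_coverage(candidates[name]), 2))
```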
“The days of physically sorting through hundreds of resumes and posting your job descriptions on each individual board are over,” notes Ilit Raz, CEO of Joonko, a talent feed solution for surfacing candidates from underrepresented backgrounds. “Without some form of automation or HR tech, you’re always going to be a step behind your competitors, especially when it comes to recruitment.” Recruiting automation is a category of technology – delivered as software-as-a-service (SaaS) apps and increasingly powered by AI – that an organization can use to manage all aspects of its workforce. Its central aims include: How does a typical AI-based recruiting automation technology help you go about achieving these goals? Here are the different functions where it can play a key role:

Despite the advances in recruitment automation software, it is not a panacea for hiring challenges. There is no technology cure for broken recruiting processes. Data overload is one critical problem. Recruiters have so much data (on candidates as well as job roles) these days that they have neither the time nor the skills to analyze it and arrive at the right decisions. Many times, the cost and complexity of accessing and verifying this data turns out to be prohibitive. Another long-standing problem is bias. While the recruiting process itself is frequently biased (owing in no small part to companies’ propensity to rely on employee referrals), the use of AI and automation in hiring can sometimes compound the problem.

“If you don’t have a representative data set for any number of characteristics that you decide on, then of course you’re not going to be properly finding and evaluating applicants,” says Jelena Kovačević, IEEE Fellow and Dean of the NYU Tandon School of Engineering. “For example,” she continues, “if Black people were systematically excluded in the past, or if you had no women in the pipeline, and you create an algorithm based on that, there is no way the future will be properly predicted. If you hire only from Ivy League schools, then you really don’t know how an applicant from a lesser-known school will perform, so there are several layers of bias.” In an infamous instance, Amazon developed an AI-based recruiting tool that analyzed patterns in resumes received over a ten-year period and ended up discriminating against women. Needless to say, they scrapped it.

The biggest area where data and AI have failed is Diversity, Equity, and Inclusion (DEI). Some of the biggest diversity-related mistakes in recruiting that are amplified by automation and machine learning are: The last one deserves special attention. While AI is certainly not a silver bullet for recruiting, it has come a long way since the Amazon fiasco. The Entelo study found that data-driven recruiting teams are already outperforming their peers. Further, 84% of recruiters are fairly confident in their ability to use AI and machine learning in their day-to-day workflow. The million-dollar question is: How can recruiting automation technology use AI algorithms in the hiring process without adding (and amplifying) human bias into the mix? The answer lies in establishing company-specific performance benchmarks, identifying key metrics to objectively measure the competency of candidates, and using talent analytics to measure the success and efficiency of your recruitment efforts. Algorithms that fulfill the purpose they’re built for frequently do so because the largest and widest datasets are available for them.
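One lightweight, concrete version of such a check is an adverse-impact review: run the algorithm on a trial pool and compare selection rates across groups, flagging anything below the four-fifths ratio commonly used as a rule of thumb in US hiring practice. The groups and numbers below are invented; the point is only the shape of the calculation.

```python
# Hypothetical adverse-impact check on a trial run of a screening algorithm.
# Flags any group whose selection rate falls below 80% of the best group's rate.
from collections import Counter

trial = [("group_a", True), ("group_a", True), ("group_a", False),
         ("group_b", True), ("group_b", False), ("group_b", False)]  # invented results

totals, selected = Counter(), Counter()
for group, picked in trial:
    totals[group] += 1
    selected[group] += picked          # True counts as 1

rates = {g: selected[g] / totals[g] for g in totals}
best = max(rates.values())
for group, rate in rates.items():
    ratio = rate / best
    status = "OK" if ratio >= 0.8 else "REVIEW"   # four-fifths rule of thumb
    print(f"{group}: selection rate {rate:.2f}, impact ratio {ratio:.2f} -> {status}")
```

A failed check does not by itself prove bias, but it is exactly the kind of signal worth reviewing manually before an algorithm becomes the de facto gatekeeper.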
It is your responsibility to collect these data points and feed them into your talent pipeline or recruiting automation software. The process is reversed on implementation – it is always a good idea to test the algorithm on a small (but diverse) pool of candidates and manually review its output before adopting it as the de-facto hiring solution for your organization. IT leaders are embracing multi-cloud to deliver on business demands for greater innovation, scalability, and transformational customer and employee experiences. Those who are already engaged in multi-cloud are maximizing flexibility but also providing insight into the challenges in executing those strategies. Foundry’s 2022 Cloud Computing Survey provides conclusive evidence that cloud is the default option for IT investment, and that multiple clouds for all their benefits come accompanied by a set of challenges for IT decision makers (ITDMs) to address. An overwhelming 72% of 850 surveyed ITDMs say their organizations are defaulting to cloud-based services when upgrading or purchasing new technical capabilities. But an even larger 92% say they have experienced significant challenges to implementing their cloud strategies. (Previous iterations of the survey, now in its ninth year, were released under the IDG Communications brand.) Regarding multi-cloud, 79% say they’ve experienced at least one significant downside to their migration. According to Foundry’s report, “The most common complaint is increased complexity (48%), followed by increased costs due to cloud management and security challenges (36%), and increased costs of training and hiring (34%). Larger companies, and companies with larger cloud budgets, are more likely to experience greater downsides.” The takeaway here is that there is no guarantee of success with multi-cloud adoption. The complexity ITDMs are experiencing now is likely only to increase in the future as organizations deal with new cloud offerings for 5G and edge applications, greater numbers of distributed workers accessing multi-cloud environments, and greater expectations from both customers and employees. It appears that many are still early on in their path to multi-cloud. Interestingly, the Foundry survey found that 74% say they use more than one public cloud provider, but only 18% classify themselves as multi-cloud. Meanwhile, 29% say they are deploying or already have deployed hybrid cloud. That indicates some confusion over what multi-cloud means. We’ll rely in this instance on VMware’s definition: “A multi-cloud is a cloud environment that includes more than one public cloud provider, regardless of whether it is hybrid or not.” The survey inconsistency likely tells us that the concept of multi-cloud is still a bit overwhelming to ITDMs living in fear of increased complexity. The good news is that those still early on their path to multi-cloud have the benefit of learning from their peers who went earlier and have dealt with challenges and overcome them. A separate survey finds that what matters most is consistency. In that poll, 91% of executives say they want to improve consistency across their public cloud environments. Satisfying that need for operational consistency across clouds is one of the main objectives behind VMware Universal Cloud, which aims to make the transition to private and public clouds as seamless and cost-efficient as possible. 
VMware Cloud Universal offers its customers the ability to select from program offerings that span multi-cloud infrastructure, cloud management, application modernization, and premier customer success capabilities. It unifies compute, network, and storage capabilities across public/private cloud infrastructures, management, and applications, utilizing familiar VMware technology and providing a single management control plane. That’s certainly one way to achieve consistency, whether you define success as hybrid cloud or multi-cloud. To learn more about how VMware Cloud Universal can simplify your evolving cloud environment, go to https://www.vmware.com/products/cloud-universal.html. The novelty of ecommerce and digital services has officially worn off. If the pandemic made us realize anything about the way our global marketplace works, it’s that online shopping and services are now an essential part of daily life. However, as web-based markets and platforms have grown, so have customer expectations. It’s not enough to simply offer an adequate online channel. Now it has to compete for your customers’ attention. Today’s consumers crave a seamless and effortless experience from the second they enter a webstore or open their banking app, and that experience includes logging in. Though customers are concerned about privacy and security, they are unwilling to sacrifice a seamless journey for unseen benefits. In 2022, digital CX is focused on making every part of the customer’s path smooth and fast: smoother product searches, speedy bank transfers and nearly invisible logins. Yet nothing halts an online shopper in their tracks like an account lockout or a forgotten password. Authentication problems are the most frustrating obstacles a customer will face online, and businesses must solve them if they want to increase their revenue and competitive edge. Here are five reasons why giving customers a good first login experience can drive revenue in 2022: What makes a great first impression in a digital experience? For most consumers, it’s convenience. If a product can’t be searched up, purchased and delivered right to their door with minimal fuss, consumers will happily turn to a competitor. And, while banks and other digital services don’t suffer the same flightiness in customers, their users will avoid channels that fail to meet their ease-of-use expectations. For example, some businesses wish to drive revenue through a mobile app in addition to their website or brick-and-mortar locations. However, getting customers to adopt an app that forces them to re-register or one that funnels them through a frustrating login process can prove difficult. According to a 2020 Experian report[1], 1 out of 3 customers will abandon their online shopping carts if it takes longer than 30 seconds to complete an online transaction. Imagine trying to get these customers to download and install an app, then re-login on the new channel. It’s simply not going to happen. The tough reality is that consumers have the upper hand when it comes to registration and login convenience in the digital marketplace. You might already have competitive prices, great customer service and a robust selection — but those hardly matter if customers get hung up on authentication. Contemporary consumers often shop across a wide range of online marketplaces, and they’ll be comparing your channels to your competitors’. If you think that a friction-filled login process can’t hurt your revenue, think again. 
One of the top reasons customers abandon their online shopping carts is that they simply don’t feel secure. According to a 2022 Baymard Institute survey[2], nearly 1 in 5 customers have walked away from a webstore because they didn’t trust it with their credit card information. Customers in 2022 are more wary than ever about handing over their financial information. Data breaches, account takeovers and fraud keep them paranoid — so you need to earn their trust. But the fear that their credit card information is at risk is not the only thing to worry about. Customers who feel like their digital identity is safe are more likely to spend time browsing, searching and selecting services and products they want to buy. In 2021, an Experian survey[3] revealed that 55% of consumers think of security as the most important aspect of their online experience. This suggests that roughly half of the customers who visit your digital channels believe that security is more important than your prices, your selection and your brand. Smart companies in 2022 will evaluate how they show customers that shopping with them is safer than ever. It’s not enough to scatter “Norton Verified” and TrustPilot logos across your header and footer. You need to offer more login options, more security features and more control. Back in 2017, a Visa survey[4] discovered that more than half (53%) of credit cardholders would switch banks if their current one doesn’t offer biometric authentication options in the future. We can’t say for certain if they switched — but five years later, many banks have heard this demand and answered it with on-device biometric logins. We’ve already used this term above, but “friction” — a catchall phrase for everything that slows down a customer journey — goes beyond payment options and navigation menus. The reality is that the concept of CX friction applies to logging in, too. User friendliness is nothing new in digital experience, yet many companies forget that authentication is an irrevocable part of the process. It’s an oversight that causes many customers to suffer, and many more will walk away unless companies start taking comprehensive CX, including logins, seriously. If customers are impeded by a high-friction login process, they might become frustrated and give up. According to research from Mastercard[5], a third of customers will simply walk away if they get locked out of their account. Similarly, a FIDO Alliance report[6] shows that 60% of consumers canceled transactions because they either could not remember their password or were being forced to create a new account and password to make the purchase. On the other hand, customers adore businesses that give them a more seamless experience. The ability to swap between different channels without re-registering, for example, is an innovation that some companies have begun to offer. Combined with passwordless methods of authentication like physical biometrics, customers can experience the most effortless login experience — one that keeps them moving toward your products and services instead of struggling with account lockouts. When authentication is fast and smooth, customers are more able to enjoy an uninterrupted online experience. With less frustration keeping them from transitioning from interest to purchase, they’re much more likely to browse, buy goods and services, and recommend your brand to others. Simultaneously, easier logins prevent you from losing a customer because they experience login difficulties. 
Instead, they’re focused on their personalized journey and the products you offer. Problems arise when otherwise happy customers are forced to create a new account with a username and password. Instead of quickly moving on to payment, they’re forced to include yet another credential combination in their browser. It’s enough to keep some customers from following through with a purchase altogether, and companies aware of this obstacle will keep the flow uninterrupted. Customers shouldn’t need to think twice about authentication — even if it’s their very first time. One advantage of passwordless authentication is that new customers don’t even need to create a password and username combination. Instead, they can simply enter their email address or phone number, quickly scan their fingerprint or face and move on to the next step with minimal hassle. Businesses that offer easier, simpler registration and login methods will see more successful transactions and fewer abandoned carts. The FIDO Alliance surveyed consumers[7] across a number of different regions and found that 60% believe retailers offering on-device authentication care more about their customer experience. That’s a considerable number of people who could feel like you’re doing more for them — and more for their digital journey — than competitors. Similarly, Experian reported in 2020[8] that 77% of people feel most secure when using physical biometrics, and another 62% believe it improves their customer experience when managing finances or payments online. Here’s the bottom line: authentication expectations are changing, and customers want the ability to log in with biometrics. That means zero passwords anywhere, and without knowledge-based credentials ever showing up in the process. But it shouldn’t end there. A complete passwordless solution must offer a full spectrum of login options that work for everyone, including those who are not able or ready to use biometrics. Magic links or time-based one-time passcodes (TOTPs) are passwordless methods that also eliminate your greatest risk: customer passwords. When companies upgrade to a customer-friendly, truly passwordless authentication service, it removes the burden of complicated registration requirements and tough-to-remember passwords. Passwordless authentication turns a process that used to take several stressful minutes into a one-touch operation. The evolution of customer logins is making the digital world more accessible for all of us. The more your company invests in a better initial login experience, the happier your customers will be. Loyal customers spend more and visit more frequently, and they’ll spread the word about your secure, effortless experience. Every visitor will know something is special and different about your digital channels. While digital experience specialists have spent years tweaking pixel-perfect website designs and button colors to maximize engagement, they often forget about the friction that comes from first-time authentication. The next competitive step in improving the digital experience is building better, easier and more seamless authentication. It’s no longer a question of whether companies should adopt more refined authentication — it’s when. Ready to say goodbye to passwords? Learn more about BindID today! In an organization’s pursuit of digital transformation and innovation, the onus largely falls on agile and development leaders, who must field requests from business leaders and manage expectations, all while speeding time to market. 
In performing this balancing act, leaders continue to be stifled by misalignment, which creates tremendous waste and inefficiency and leads to low employee morale. It’s for these reasons that Value Stream Management (VSM) has become increasingly prevalent. Through effective implementation of VSM, development leaders can establish portfolios, programs, and cross-functional teams that are better aligned with business objectives. In the process, they can make better funding decisions, help their teams execute more efficiently, and scale their agile implementations. The following sections outline the key steps leaders can take to successfully make the move to VSM. The successful implementation of VSM starts with defining value streams. In a software organization, a value stream may be comprised of products, teams, and tools. These definitions need to be based on a solid understanding of how your organization creates and delivers value, from when an initial investment idea arises to when customers receive value. This is a critical starting point and a vital effort to get right as it sets the foundation for planning, funding, and determining objectives and key results (OKRs). Once consistent value streams have been defined, a VSM platform ensures that business strategies can more easily be aligned with the actual delivery of work efforts. This starts with negotiations between leaders of business and delivery organizations, who have different incentives. For the business, it’s all about innovation and ROI on new products, services, features, and so on. For development, it’s about keeping the lights on with the least amount of time and money — all while meeting business demands. Through this negotiation, teams can gain buy-in and alignment, and outcomes can be defined for teams accordingly. When teams encounter challenges or need to weigh tradeoffs, such as balancing between innovation and architectural optimization, stakeholders can do so with shared, well-defined visibility into priorities. To make agile development as scalable as possible, it is vital to enable teams to have visibility into common goals, while giving each team the flexibility to work with their preferred tools and methodologies. VSM ensures that no matter what is in the toolchain, teams can aggregate all data and metrics. This enables insights based on a single source of truth — so all value stream participants and stakeholders can always understand what they need to deliver and how their work contributes to value delivery. Teams need to determine the optimal way to get work done, and then execute effectively. Now more than ever, this means providing transparency and visibility, which is essential in reducing friction and increasing trust. By having one source of truth, teams can more effectively adapt to shifts in business strategy or other changes, without having an adverse impact on other work. Delivery leaders need to work with their teams to measure success and iterate, so future work is more efficient and effective. Toward this end, it is essential to empower teams to gain access to meaningful data so they can observe challenges and gain insights for continuous improvement. In addition, it is also vital to communicate results and learnings up through the business, and to amplify success across teams. Through effective implementation of VSM tools and principles, teams can build true enterprise-scale value streams that enhance innovation, optimally balance resources, and fuel dramatic improvements in delivery. 
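In practice, the “single source of truth” described above usually surfaces a handful of flow metrics, lead time and throughput chief among them. A minimal sketch of that arithmetic follows; the work-item fields and dates are invented, and a real VSM platform would pull them from the toolchain automatically.

```python
# Illustrative value-stream flow metrics computed from completed work items.
# Field names and dates are invented for the sketch.
from datetime import date
from statistics import mean

completed_items = [
    {"id": "FEAT-1", "created": date(2022, 4, 1),  "done": date(2022, 4, 19)},
    {"id": "FEAT-2", "created": date(2022, 4, 5),  "done": date(2022, 4, 12)},
    {"id": "BUG-7",  "created": date(2022, 4, 10), "done": date(2022, 4, 13)},
]

lead_times = [(item["done"] - item["created"]).days for item in completed_items]
print(f"average lead time: {mean(lead_times):.1f} days")
print(f"throughput this period: {len(completed_items)} items")
```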
To learn more, be sure to read our eBook, “5 Steps to Value Stream Management.”

Real-time app analytics firm Amplitude has launched a new Customer Data Platform (CDP), betting on an aggressive pricing strategy to undercut competition from vendors including Twilio, Salesforce, Adobe and Oracle. The pricing strategy is similar to the game plan Amplitude used against analytics rival MixPanel back in 2014. The company had offered a freemium model in contrast to MixPanel’s paid service, helping it not only rack up customers but also attract new investors. The new CDP, according to the Y Combinator-backed firm, will be a fraction of the price of Twilio Segment and will adhere to more simplified tiering. The CDP market is growing, and is forecast to reach $20.5 billion by 2027, according to a report from Research and Markets.

Amplitude CDP has been made available to Amplitude customers in an early access program this week and will be generally available later this year, the company said, adding that the platform will be free of charge for customers streaming fewer than 10 million events per month. In this context, event streaming means interactions or transactions with customers that are sent to the CDP database and analytics engine. The new CDP platform comes with features such as a unified user interface, event streaming, audience syncing — a process that groups customers together for purposes such as marketing and sales campaigns — and a developer toolkit. The features are designed to improve data quality, reduce costs and accelerate time to data insights, according to Amplitude. With the help of the unified user interface, enterprises can create a single taxonomy for all digital analytics use cases, to collect and capture consistent data in one UI, the company said, adding that this helps with data planning, governance and engineering. The platform also comes with an event streaming feature allowing any customer enterprise to federate data to marketing tech (martech), advertising, and data pipelines such as Amazon Kinesis and Google Pub/Sub through a no-code configurable UI in a Data Connections catalog. Data Connections is a tool that combines different technologies — such as APIs, SDKs, event streams, and audience exports — to bring customer data inside Amplitude’s analytics engine and CDP database. In order to reduce costs, the company said that it has built analytics natively within the CDP platform. This means that enterprises can gather, clean and plan data along with generating insights and taking actions as required to improve customer experience.

Amplitude, which claims that it has over 1,700 customers, including 26 out of the Fortune 100 companies, and more than 500 employees, will also face competition from the likes of Amperity CDP, Bloomreach Engagement, BlueConic and Treasure Data CDP. The company, which was founded by Spenser Skates and Curtis Liu in 2012, has raised a total of $336 million with the latest Series F round closing at $150 million in 2021.

When reimagining the IT estate for data-first business, the opportunity is ripe for companies to rearchitect for a more sustainable IT environment. The benefits? Drive cost efficiencies and make headway on meeting corporate environmental, social, and governance (ESG) targets, for starters. The corporate agenda has embraced sustainability, fueled by escalating environmental concerns, because it makes good business sense.
Across the board, customers, investors, and employees want to align with companies that prioritize and execute a sustainability agenda, not just pay lip service to the latest corporate buzzword. Board members are holding executive leadership accountable to hit published targets and meet evolving regulatory requirements, potential customers are making sustainability goals a condition of contracts, and employees are gravitating to companies with strong ESG track records. A 451 Research report confirmed that 57% of enterprises deem efficiency and sustainability very important to competitive differentiation across all channels. As companies reimagine infrastructure and operations for digital-first business, it only makes sense for them to incorporate sustainability as part of their holistic transformation. In fact, companies that link digital and sustainable transformation are 2.5 times as likely to be among the next wave of strongest-performing businesses, compared with those that do not take the approach, according to data from the World Economic Forum. An Accenture report confirmed the correlation, finding that companies leading in both digital adoption and sustainable practices are nearly three times as likely as other companies to be among tomorrow’s strongest-performing businesses. “Customers in the midst of digital transformation are finding technology solutions that they can pair with sustainable transformation, because the two go hand-in-hand,” says John Frey, HPE’s chief technologist, Sustainable Transformation. “It may not drive them to many new behaviors, but it gives them additional support for initiatives that have sustainability benefits as well, such as power and thermal monitoring.” IT operations due for a sustainability makeover The current state of IT operations misses the mark on sustainability objectives, in part because IT has historically been evaluated on other metrics. For example, most IT organizations aim to hit the standard 99.999% uptime service-level agreement (SLA) and thus are inclined to overprovision to ensure redundancy and performance. The annual budget cycle also encourages organizations to buy and deploy equipment even if there’s not a direct need, to avoid forfeiting available IT dollars. “If your primary measurement is uptime and SLAs and you’re not asked to consider other metrics like power consumption or power usage effectiveness [PUE] ratings, you’re going to build an infrastructure that matches the ability to meet your goals,” Frey says. Consider that in the average data center, a quarter of the compute capacity is comatose, performing no useful work. In addition, 67% of enterprises overprovision on-premises storage to be at least 1.3x what they currently need; nonvirtualized environments have their own set of challenges, with research showing that they measure a mere 10% compute utilization rate. Overall, such inefficiencies can cause power, space, and cooling constraints, not to mention that they also add unnecessary energy, maintenance, and real estate costs. Given that technology infrastructure is typically one of the largest contributors — if not the largest contributor — to a company’s energy footprint, there is a significant impact if modifications are made. By 2025, IDC research shows, 90% of the Global 2000 will bring their sustainability mandates to the IT agenda, insisting on use of reusable materials in hardware supply chains, carbon neutrality targets for IT facilities, and lower energy use as prerequisites for doing business. 
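To make those utilization figures concrete, a rough back-of-the-envelope calculation helps; the numbers below are invented, and PUE here is simply total facility power divided by IT equipment power.

```python
# Back-of-the-envelope energy math with invented numbers.
# PUE (power usage effectiveness) = total facility power / IT equipment power.
it_load_kw = 400.0        # hypothetical IT equipment draw
pue = 1.8                 # hypothetical facility PUE
comatose_share = 0.25     # "a quarter of the compute capacity is comatose"

facility_kw = it_load_kw * pue
wasted_it_kw = it_load_kw * comatose_share
wasted_total_kw = wasted_it_kw * pue   # idle IT watts drag cooling/power overhead along

print(f"facility draw: {facility_kw:.0f} kW to support {it_load_kw:.0f} kW of IT load")
print(f"comatose capacity wastes roughly {wasted_total_kw:.0f} kW of facility power")
```

Even with made-up inputs, the shape of the result shows why rightsizing and higher utilization show up directly in both energy use and cost.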
By modernizing with an as-a-service platform such as HPE GreenLake, companies can address many of the key causes of inefficiencies in on-premises or data-center-located IT equipment through use of state-of-the-art equipment and technologies. As part of the onboarding process, HPE evaluates workloads and optimizes the hardware and infrastructure accordingly, with the aim of increasing server utilization, optimizing server refresh cycles, and rightsizing redundancy requirements. Moreover, the metering capabilities delivered as part of HPE GreenLake services enable businesses to match their workloads and capacity closely to consumption, eliminating wasteful overprovisioning without incurring the risks of underprovisioning. “By nature, HPE GreenLake is designed to reduce overprovisioning,” Frey says. “As we size HPE GreenLake, we’re looking at workloads and optimizing hardware so you can see significant space, power, carbon emission, and equipment reduction.” HPE GreenLake Central delivers real-time visibility and optimization insights to actively manage infrastructure utilization at a granular level through a single platform. In addition, the predictable, holistic cost model for HPE GreenLake also helps keep costs in check and ensures that only needed infrastructure is deployed, another way to boost sustainable business practices. To complete the picture, HPE’s Asset Lifecycle Services helps organizations contribute to a circular economy by retiring assets responsibly when they are no longer of service. Almost 90% of assets recovered by HPE Financial Services are given a second life.
Moving forward
As IT organizations push ahead with parallel sustainability and data-first business tracks, here are six best practices that can help ensure a successful transition: With the right technologies in place and change management practices, organizations can embark on data-first business transformation with an eye toward energy reduction and sustainability — all designed to carve out a lasting competitive edge. For more information, visit the HPE GreenLake “Learn More” page. When companies first start deploying artificial intelligence and building machine learning projects, the focus tends to be on theory. Is there a model that can provide the necessary results? How can it be built? How can it be trained? But the tools that data scientists use to create these proofs of concept often don’t translate well into production systems. As a result, it can take more than nine months on average to deploy an AI or ML solution, according to IDC data. “We call this ‘model velocity,’ how much time it takes from start to finish,” says IDC analyst Sriram Subramanian. This is where MLOps comes in. MLOps — machine learning operations — is a set of best practices, frameworks, and tools that help companies manage data, models, deployment, monitoring, and other aspects of taking a theoretical proof-of-concept AI system and putting it to work. “MLOps brings model velocity down to weeks — sometimes days,” says Subramanian. “Just like the average time to build an application is accelerated with DevOps, this is why you need MLOps.” By adopting MLOps, he says, companies can build more models, innovate faster, and address more use cases. “The value proposition is clear,” he says. IDC predicts that by 2024, 60% of enterprises will have operationalized their ML workflows by using MLOps.
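The analysts quoted here don't prescribe specific tooling, but the traceability MLOps introduces can be pictured with a short, hedged sketch using the open-source MLflow tracking library and a toy scikit-learn model. The dataset, parameters, and run name are placeholders, and MLflow is used purely as one illustrative option among many.

```python
# Minimal sketch of the traceability MLOps adds: every training run records its
# parameters, metrics, and model artifact. Dataset, parameters, and run name are
# placeholders; none of the vendors quoted in this article necessarily use MLflow.
import mlflow
import mlflow.sklearn
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

with mlflow.start_run(run_name="poc-classifier"):
    params = {"C": 1.0, "max_iter": 1000}
    model = LogisticRegression(**params).fit(X_train, y_train)

    # Log inputs and outputs so a later production question (which data,
    # which settings, which model file?) has a recorded answer.
    mlflow.log_params(params)
    mlflow.log_metric("accuracy", accuracy_score(y_test, model.predict(X_test)))
    mlflow.sklearn.log_model(model, "model")
```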
When companies were surveyed about the challenges of AI and ML adoption, the lack of MLOps was a major obstacle, second only to cost, Subramanian says. Here we examine what MLOps is, how it has evolved, and what organizations need to accomplish and keep in mind to make the most of this emerging methodology for operationalizing AI. When Eugenio Zuccarelli first started building machine learning projects several years ago, MLOps was just a set of best practices. Since then, Zuccarelli has worked on AI projects at several companies, including ones in healthcare and financial services, and he’s seen MLOps evolve over time to include tools and platforms. Today, MLOps offers a fairly robust framework for operationalizing AI, says Zuccarelli, who’s now innovation data scientist at CVS Health. By way of example, Zuccarelli points to a project he worked on previously to create an app that would predict adverse outcomes, such as hospital readmission or disease progression. “We were exploring data sets and models and talking with doctors to find out the features of the best models,” he says. “But to make these models actually useful we needed to bring them in front of actual users.” That meant creating a mobile app that was reliable, fast, and stable, with a machine learning system on the back end connected via API. “Without MLOps we would not have been able to ensure that,” he says. His team used the H2O MLOps platform and other tools to create a health dashboard for the model. “You don’t want the model to shift substantially,” he says. “And you don’t want to introduce bias. The health dashboard lets us understand if the system has shifted.” Using an MLOps platform also allowed for updates to production systems. “It’s very difficult to swap out a file without stopping the app from working,” Zuccarelli says. “MLOps tools can swap out a system even though it’s in production with minimal disruption to the system itself.” As MLOps platforms mature, they accelerate the entire model development process because companies don’t have to reinvent the wheel with every project, he says. And the data pipeline management functionality is also critical to operationalizing AI. “If we have multiple data sources that need to talk to each other, that’s where MLOps can come in,” he says. “You want all the data flowing into the ML models to be consistent and of high quality. Like they say, garbage in, garbage out. If the model has poor information, then the prediction will itself be poor.” But don’t think just because platforms and tools are becoming available that you can ignore the core principles of MLOps. Enterprises that are just starting to move to this discipline should keep in mind that at its core MLOps is about creating strong connections between data science and data engineering. “To ensure the success of an MLOps project, you need both data engineers and data scientists on the same team,” Zuccarelli says. Moreover, the tools necessary to protect against bias, to ensure transparency, to provide explainability, and to support ethics platforms — these tools are still being built, he says. “It definitely still needs a lot of work because it’s such a new field.” So, without a full turnkey solution to adopt, enterprises must be versed in all facets that make MLOps so effective at operationalizing AI. And this means developing expertise in a wide range of activities, says Meagan Gentry, national practice manager for the AI team at Insight, a Tempe-based technology consulting company.
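Zuccarelli doesn't detail how his health dashboard measures shift, but one common way to quantify drift between training data and live traffic is the population stability index. The sketch below, with synthetic data and a rule-of-thumb threshold, illustrates the general idea rather than describing his system.

```python
# Sketch of the kind of shift check a model "health dashboard" might run.
# The synthetic data and the 0.2 threshold are illustrative assumptions, not
# details of the CVS Health system described in the article.
import numpy as np

def population_stability_index(expected: np.ndarray, actual: np.ndarray, bins: int = 10) -> float:
    """Compare two samples of one feature; a larger PSI means more drift."""
    edges = np.quantile(expected, np.linspace(0.0, 1.0, bins + 1))
    # Widen the outer edges so out-of-range production values still land in a bin.
    edges[0] = min(edges[0], actual.min()) - 1e-9
    edges[-1] = max(edges[-1], actual.max()) + 1e-9
    e_frac = np.clip(np.histogram(expected, edges)[0] / len(expected), 1e-6, None)
    a_frac = np.clip(np.histogram(actual, edges)[0] / len(actual), 1e-6, None)
    return float(np.sum((a_frac - e_frac) * np.log(a_frac / e_frac)))

rng = np.random.default_rng(0)
training_ages = rng.normal(55, 10, 10_000)   # feature distribution at training time
live_ages = rng.normal(62, 12, 2_000)        # what production traffic looks like now

psi = population_stability_index(training_ages, live_ages)
print(f"PSI = {psi:.3f}")
if psi > 0.2:  # a commonly used rule of thumb for "significant" shift
    print("Significant shift detected: review or retrain the model.")
```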
MLOps covers the full gamut from data collection, verification, and analysis, all the way to managing machine resources and tracking model performance. And the tools available to aid enterprises can be deployed on premises, in the cloud, or on the edge. They can be open source or proprietary. But mastering the technical aspects is only part of the equation. MLOps also borrows an agile methodology from DevOps, and the principle of iterative development, says Gentry. Moreover, as with any agile-related discipline, communication is crucial. “Communication in every role is critical,” she says. “Communication between the data scientist and the data engineer. Communication with the DevOps and with the larger IT team.” For companies just starting out, MLOps can be confusing. There are general principles, dozens of vendors, and even more open-source tool sets. “This is where the pitfalls come in,” says Helen Ristov, senior manager of enterprise architecture at Capgemini Americas. “A lot of this is in development. There isn’t a formal set of guidelines like what you’d see with DevOps. It’s a nascent technology and it takes time for guidelines and policies to catch up.” Ristov recommends that companies start their MLOps journeys with their data platforms. “Maybe they have data sets but they’re living in different locations, but they don’t have a cohesive environment,” she says. Companies don’t need to move all the data to a single platform, but there does need to be a way to bring in data from disparate data sources, she says, and this can vary based on application. Data lakes work well for companies doing a lot of analytics at high frequencies who are looking for low-cost storage, for example. MLOps platforms generally come with tools to build and manage data pipelines and keep track of different versions of training data, but it’s not press and go, she says. Then there’s model creation, versioning, logging, weighing the feature sets and other aspects of managing the models themselves. “There is a substantial amount of coding that goes into this,” Ristov says, adding that setting up an MLOps platform can take months and that platform vendors still have a lot of work to do when it comes to integration. “There’s so much development running in different directions,” she says. “There’s a lot of tools that are being developed, and the ecosystem is very big and people are just picking whatever they need. MLOps is at an adolescent stage. Most organizations are still figuring out optimal configurations.” The MLOps market is expected to grow to around $700 million by 2025, up from about $185 million in 2020, says IDC’s Subramanian. But that is probably a significant undercount, he says, because MLOps products are often bundled in with larger platforms. The true size of the market, he says, could be more than $2 billion by 2025. MLOps vendors tend to fall into three categories, starting with the big cloud providers, including AWS, Azure, and Google Cloud, which provide MLOps capabilities as a service, Subramanian says. Then there are ML platform vendors such as DataRobot, Dataiku, and Iguazio. “The third category is what they used to call data management vendors,” he says. “The likes of Cloudera, SAS, and Databricks.
Their strength was data management capabilities and data operations and they expanded into ML capabilities and eventually into MLOps capabilities.” All three areas are exploding, Subramanian says, adding that what makes an MLOps vendor stand out is whether they can support both on-prem and cloud deployment models, whether they can implement trustworthy and responsible AI, whether they’re plug-and-play, and how easily they can scale. “That’s where differentiation comes in,” he says. According to a recent IDC survey, the lack of methods to implement responsible AI was one of the top three obstacles to AI and ML adoption, tied in second place with lack of MLOps itself. This is in large part because there are no alternatives to embracing MLOps, says Sumit Agarwal, AI and machine learning research analyst at Gartner. “The other approaches are manual,” he says. “So, really, there is no other option. If you want to scale, you need automation. You need traceability of your code, data, and models.” According to a recent Gartner survey, the average time it takes to take a model from proof of concept to production has dropped from nine to 7.3 months. “But 7.3 months is still high,” Agarwal says. “There’s a lot of opportunity for organizations to take advantage of MLOps.” MLOps also requires a cultural change on the part of a company’s AI team, says Amaresh Tripathy, global leader of analytics at Genpact. “The popular image of a data scientist is a mad scientist trying to find a needle in a haystack,” he says. “The data scientist is a discoverer and explorer — not a factory floor churning out widgets. But that’s what you need to do to actually scale it.” And companies often underestimate the amount of effort it will take, he says. “People have a better appreciation for software engineering,” he says. “There’s a lot of discipline about user experience, requirements. But somehow people don’t think that if I deploy a model I have to go through the same process. Then there’s the mistake assuming that all the data scientists who are good in a test environment will very naturally go and would be able to deploy it, or they can throw in a couple of IT colleagues and be able to do it. There’s a lack of appreciation for what it takes.” Companies also fail to understand that MLOps can cause ripple effects on other parts of the company, leading often to dramatic change. “You can put MLOps in a call center and the average response time will actually increase because the easy stuff is taken care of by the machine, by the AI, and the stuff that goes to the human actually takes longer because it’s more complex,” he says. “So you need to rethink what the work is going to be, and what people you require, and what the skill sets should be.” Today, he says, fewer than 5% of decisions in an organization are driven by algorithms, but that’s changing rapidly. “We anticipate that 20 to 25% of decisions will be driven by algorithms in the next five years. Every statistic we look at, we’re at an inflection point of rapid scaling up for AI.” And MLOps is the critical piece, he says. “One hundred percent,” he says. “Without that, you will not be able to do AI consistently. MLOps is the scaling catalyst of AI in the enterprise.” CIOs seeking to hire or retain skilled IT workers should continue to budget generously for payroll. 
Pay premiums for non-certified tech skills rose by the largest amount in 14 years in the first quarter of 2022, according to the latest edition of the IT Skills and Certifications Pay Index, compiled by Foote Partners. The quarterly index also covers pay premiums for IT certifications, which — in a bit of good news for cash-strapped CIOs — have resumed a long decline begun more than three years ago with their largest quarterly drop since late 2020. Still, the analyst firm stands by its mid-2020 pandemic prediction that “employers won’t be feeling a true sense of normalcy or find comfort in predicting their future until the fourth quarter, despite recent relief in various restrictions.” The average premium paid for non-certified skills rose by 1.6% in the quarter, lifting the pay premium for one skill from around 9.35% to around 9.5% of base salary. To put things in perspective, that would mean an increase of around $150 on a salary of $100,000 versus the previous quarter, bringing the average premium back to where it was two years ago. The pay premium value of IT certifications, however, declined by 1.2% over the quarter (and 11% over the past three years), to around 6.5% of base salary. The size of the premiums paid varied widely by skill, Foote’s analysts found. In the first quarter of 2022, skills centered around “management, methodology, and process” were the most richly rewarded, buoyed by demand for skills such as AIops, Azure Key Vault, big data analytics, complex event processing/event correlation, deep learning, DevSecOps, Google TensorFlow, MLOps, prescriptive analytics, PyTorch, Scaled Agile Framework (SAFe), security architecture and models, and site reliability engineering (SRE), almost all of which continued to grow in value. (Among them, only TensorFlow and SRE fell.) The only other skills as richly rewarded were smart contracts (commanding a bonus of a whopping 20% of base pay) and blockchain, both of which Foote classifies as “data/database” (the second-best-paying category overall), and Ethereum, which it lumps in with programming languages, the rest of the Azure stack, and a bunch of Apache projects under “applications development tools/platforms” (the third-best-paying category). It is not always the newest technologies that pay the best. Some of the fastest-rising pay premiums during the quarter were for knowledge of the venerable DB2 and Apache Sqoop, a command-line interface for loading Hadoop databases that ceased development last year. Other skills with fast-rising premiums included WebSphere MQ, Apache Ant, Azure Cosmos DB, DataRobot enterprise AI platform, Tibco BusinessWorks, RedHat OpenShift, Microsoft’s System Center Virtual Machine Manager and SharePoint Server, mobile operating systems, and a clutch of SAP technologies. Foote does not report on any SAP certifications, but among the 579 certifications it does report on, architecture, project management, process and information security certifications remain the most valuable, commanding a pay premium of just over 8%. They are in long-term decline, though, while over the past year or two database certifications have been rising fast to meet them (and now offer a pay premium of over 7%).
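The dollar figures above follow directly from the percentages; a quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the figures above.
base_salary = 100_000
old_premium, new_premium = 0.0935, 0.095   # share of base salary paid for one skill

print(f"Premium for one skill: ${base_salary * new_premium:,.0f}")                       # ~$9,500
print(f"Change versus last quarter: ${base_salary * (new_premium - old_premium):,.0f}")   # ~$150
```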
Cisco Certified Architect remains the most lucrative certification, followed by Certified ScrumMaster, Zachman Certified Enterprise Architect, and a bunch of cybersecurity qualifications: Certified Secure Software Lifecycle Professional (CSSLP), EC-Council Certified Encryption Specialist (ECES), GIAC Security Expert (GSE), and GIAC Security Leadership (GSLC). The fastest-gaining certifications included the Certificate of Cloud Security Knowledge, Certified Healthcare Information Security and Privacy Practitioner (ISC2), and vendor-specific certifications such as Juniper Networks Certified Professional (JNCIP), AWS Certified Data Analytics, IBM Certified Systems Administrator, Oracle Cloud Infrastructure Certified Architect Professional, and Tibco Certified Architect. Much as there was profit to be made selling pick-axes during the goldrush, there’s also money to be made in the certification process itself, with pay premiums rising fast for CompTIA Certified Technical Trainers and Microsoft Certified Trainers. Foote reminded CIOs that demand is not the only thing affecting the pay premium commanded by these skills: There may also be changes in supply, as more workers pick up the skills they see paying the biggest premiums or are encouraged by aggressive vendor marketing to pursue particular training programs. There is also a great deal of volatility in the rankings, making it difficult to base long-term career or IT strategy decisions on one quarter’s numbers. Over one-third of the noncertified skills surveyed changed in value over the quarter, 12.6% of them rising and 21% falling. The data and databases segment was the most volatile, with more than half the skills surveyed changing in value, 39.7% of them downward. As executive vice president and chief transformation and digital officer at Novant Health, Angela Yochem is responsible for all digital health operations for the not-for-profit healthcare system, including 24/7 clinical services, a device company, data science, an innovation lab, a business growth function that explores new revenue streams for healthcare, and more. Yochem recently added to those responsibilities when she took on the role of COO and general manager of Novant Health Enterprises, a new spinoff with ancillary ambulatory locations spanning 13 states. When we spoke for the Tech Whisperers podcast, we talked about those “answer the call” moments that shape and define a career. The 2021 CIO Hall of Fame inductee shared her inspiring journey from CIO to P&L leader. We also dug into her diverse accountabilities and ownership of classic IT, all things digital, new business growth, the digital health line of business and, most recently, distributed operations in adjacent businesses. After the show, she shared some more insights and advice from her personal and professional playbooks. What follows is that conversation, edited for space and clarity. Dan Roberts: Many CIOs are still battling the old narrative—you’ve called it an artificial construct—that IT is a cost center. As someone who’s now leading P&L and commercializing many of the exciting digital products and services that Novant Health has built, how have you been able to change that narrative? Angela Yochem: It’s important for business leaders to understand that there is a strong digital underpinning to all successful products and services. It’s impossible to separate the evolution of business from the advances in technology and our unprecedented access to data. 
This is true whether you’re in a digital business, selling a product and engaging with customers and getting paid through digital channels, or in a more traditional business. Nothing is designed, manufactured, and sold without a highly automated research process, highly automated digitized design process, highly automated sourcing and manufacturing process, highly automated distribution process, and so on. Not to mention the advanced insights and predictive modeling that should drive all major and minor decisions, as well as personalized engagement with stakeholders of all types, and so on. All aspects of a business are dependent upon and differentiated by the sophistication of the underpinning technical capability. This notion that the technology organization exists for some reason other than applying technical advance and support to business evolution is erroneous. It’s a mindset that persists in many organizations. What needs to be done to shift it? Many senior executives “grew up” in their careers at a time when technology was really doing little more than just tracking personnel files, allowing for basic taking of orders, shipping of goods, billing for services, and the like. They grew up at a time when what you did was try to squeeze that division into running with the smallest budget possible so that you can apply the real dollars to the innovations that were happening elsewhere. Today, the script has changed. Companies apply money to the digital capabilities that will differentiate their products and services. And that is something that must be factored into every business leader’s budget and projections for what their margins are going to look like. Yes, there may be an aggregation of technology costs inside of an IT cost center, but the decisions to spend are being made by the executive teams of these companies. For that reason, it’s important to have at least one member of the executive team who understands the technical underpinnings and what will need to be done as great ideas are floated. You would also hope that this person would provide opportunities that may not be reasonable to expect other business leaders to foresee as part of those executive team conversations for spend prioritization. It takes courage to have the hard conversations and make bold decisions at the enterprise level. And sometimes the tough calls can create cynicism and change resistance in our people. What’s your advice to leaders in these situations? It’s important that leaders in every domain space understand that the leadership of that domain is just one hat they wear. The other is a leader of the entire company. And the decisions that have to be made for the greater good will not always be in the best interests of growth for their domain. Sometimes the best decision for the greater good will require that one derails plans made within one’s own domain. And this is hard, because it means you have teams of very smart, driven, amazing people solving some of the world’s most difficult problems, full steam ahead and then a project stops. And it’s not because it’s not great work and it’s not because it’s not appropriate work. It’s because it is ever so slightly less of a priority than something else. And they fell just below the waterline. 
Delivering that message and keeping those smart, brilliant people engaged, energized and interested and pivoting to something that may be less interesting to them but is still extraordinarily important to the company—that deeply personal engagement is required from every leader. The larger organizations become, the more difficult that degree of engagement can be. It means you’re almost always being asked very difficult questions. You need to be very clear in your mind about the answers to those questions so that there’s clarity across the board. You’ve been very intentional about how you communicate and create value in your organization. A good example is the rebranding exercise you went through to develop the umbrella of DPS, or “Digital Products and Services.” How has that changed the conversation around technology at Novant Health? It’s been called rebranding by many, but I would suggest it is a more appropriate naming convention. What we would have historically called IT or IT services remains; we just don’t refer to them as IT. Our Digital Products and Services organization also contains teams focused on new business growth—identifying new streams of revenue that are adjacent to the services we traditionally provide. They do all the market analysis, analysis around operational feasibility and long-term financial viability, and all of the work and engagement with various subject matter experts. We have a digital health business inside of DPS, which includes all 24/7 virtual clinical services, with all of the operations of that business, including dozens of physicians, nurses, pharmacists, and advanced practitioners as well as operators. We have a device company that sells solutions directly to consumers and to corporate entities. The physicians responsible for clinical informatics, which is essentially the strategy for incorporating advanced tech into clinical environments, are also inside DPS. The expectation across the organization is one of collaboration and cross-functional consult. We expect those leaders who serve the traditional IT capabilities to participate in the broader digital conversation and delivery activity across all of these areas. So all of the groups live inside Digital Products and Services, which isn’t IT but contains IT, and that allows our IT team members to engage in a much broader context. With so many things coming at you, what is the thought process or lens you look through when evaluating new opportunities, whether it’s a new role at your company or a new industry? There are three things I consider. The first is mission. I’ve had the great privilege of working in a variety of industries, for a variety of companies. One of the things I’ve learned about myself is that if the company is trying to do something meaningful to make our communities better, if there’s a mission that supports growth of communities and health of communities in the broadest possible sense, those are the things that motivate me. I love to know my work is doing good in the world. The second thing is impact, and when I say impact, I mean personal impact. If any number of leaders could fill a role and keep the lights on, then that role is less interesting to me than a role that very few people could do well. Roles that require transformative leadership, deep digital expertise, a creative mindset and courage – these are the roles that are hard to fill, regardless of the associated title, be it CEO or CDO or COO.
I want to make sure that my particular talents are needed to have an impact for the company and make a difference to the stakeholders. And then lastly is growth, which we spoke a little bit about in the podcast. I need to make sure that I am growing and learning while also contributing to the growth and evolution around me. Everyone concerned should be better for us having been engaged. I want to be a better version of myself, and I want the company and our businesses and our team members to be better off than they were when I joined. So, it’s a mutual growth opportunity that is important to me. For more lessons from Yochem’s leadership playbook, check out episode two of the Tech Whisperers podcast. “This initiative is not about PeaSoup as a company, marketing hype or promotion. It’s about the future, our children, and generations to come. Unless stopped, humanity’s destruction of nature will ultimately render the planet completely degraded. Climate change with extreme weather events is already occurring more frequently with great intensity. We are also radically approaching critical tipping points. We need to act now and bend the curve of carbon emissions. There is only one pathway forward and that involves making rapid and deep cuts in emissions – including at data centers and across the cloud computing industry that is absolutely fundamental and crucial for how we work and nearly everything we produce.” – Art Malinowski, Head of Marketing at PeaSoup Hosting Limited
Disruptive by nature, PeaSoup was the first cloud company in Europe to go to market with a fully hyper-converged architecture, one of only five in the world at the time. Based on VMware technology, that foundation is today relied on by a rapidly growing customer base that includes leaders in the automotive, education, government, healthcare, manufacturing, and retail industries that demand the elasticity, utter reliability, and ironclad security of the PeaSoup cloud. We recently connected with PeaSoup’s Head of Marketing Art Malinowski to learn more about the company’s recently unveiled ECO Cloud Service, its aggressive emissions-related goals, and the growing demand for environmentally friendly IT solutions. We also took the opportunity to learn more about PeaSoup’s deployment of immersive liquid cooling technology at its state-of-the-art data center in Gatwick, U.K. “We offer a full array of public, private, and hybrid cloud services and solutions, including Infrastructure-as-a-Service, Backup-as-a-Service, and hyper-converged storage that removes around 40% of the risk of downtime over comparable offerings and increases performance,” says Malinowski. “On the most basic level we provide our customers with everything they need to realize the full potential of a truly virtual, software-defined datacenter that addresses their unique needs – all through an environment that can be controlled by the existing IT team through a full portal interface or with on-premises management tools for VMware environments.” Malinowski also notes that PeaSoup guides and supports customers at every step of their cloud journey, from planning their cloud infrastructure to data migration and fully managed cloud services. It’s work that he believes is inherently good for the environment. “Centralizing services into the cloud provides the opportunity to deliver efficiency,” says Malinowski.
“Organizations of all sizes can take advantage of the investment in power efficiency and heat reuse that would not be commercially viable on an individual basis.” That is certainly the case with PeaSoup. In addition to being VMware Zero Carbon Committed, all of the company’s data centers and buildings will be powered by 100% renewable energy by 2025. But that’s just the start. PeaSoup is the only provider in the U.K. to use liquid immersion cooling technology to deliver a carbon-zero cloud – the company’s ECO Cloud service. PeaSoup embraces environmentally friendly immersive cooling for cloud services. This technology increases performance by stabilising ambient CPU temperature and reduces data centre footprint and power consumption by a whopping 30%. “Our Zero Carbon efforts are very much focused on tangible elements where we can make a real difference and not offer lip service,” adds Malinowski. “As an organization we strive to develop our services with others of a similar mindset to reduce our impact on the environment – not just as an ‘offset,’ but as a true sustainable position where the whole chain of supply can be seen to benefit the environment.” The liquid immersion technology used in PeaSoup’s data center in Gatwick accomplishes that and cools servers by placing them in a biodegradable dielectric liquid. The cooling can be up to 1,000 times more efficient than air conditioning. It’s a process that not only makes all components less susceptible to temperature changes, but also enables them to be placed much closer together – allowing PeaSoup to simultaneously increase the compute and storage capacity the facility delivers. The system also uses very little water. “Power is one of the most important factors in calculating a carbon footprint and Power Usage Efficiency, or PUE,” he says. “The dielectric biodegradable liquid used to cool our servers stands at a 1.03 score, while the average PUE for data centers that use air conditioning is around the 2.0 mark.” Despite such impressive gains, Malinowski stresses that PeaSoup has plans for much more. By the end of 2025, and as part of its work with local councils, the company will move to install thermal heating exchanges at the end of its closed-loop circulation system to help heat local and public buildings. “We offer the high-performance, secure, and utterly reliable cloud services and solutions the most demanding enterprises need for less than the competition, without ingress and egress fees and with personal, dedicated service that delivers peace of mind,” he says. “VMware has long been the backbone of our cloud infrastructure and with the VMware Zero Carbon Committed initiative we can improve our goals and raise awareness about sustainable cloud computing platforms while accelerating the implementation of alternative cooling methods that don’t require carbon offsetting. With our cloud we can help enterprises immediately lower their own carbon footprint.” Learn more about PeaSoup and its partnership with VMware here.
By Milan Shetti, CEO, Rocket Software
Over the past two years, many business leaders received a crash course in managing a distributed workforce — whether they wanted to or not. While adjusting to fully remote work presented challenges for all, we at Rocket Software were lucky to have plenty of experience thanks to our global workforce, with centers of excellence strategically located throughout North America, Europe, Asia and Australia.
It does require thoughtfulness and extra care to ensure every member of a global workforce feels included and supported by their employer and connected to both their colleagues and their organization’s overarching mission. Any good leader understands that when their employees thrive, the company thrives; the longevity and success of a company truly depend on its people. Here are three considerations for leaders balancing a global workforce to grow and succeed amid the Great Resignation: The most important factor to keep in mind when fostering a great corporate culture across a global workforce is that no matter where a company’s headquarters are or where the CEO is located, it’s the employees in each and every region who are the heart of the organization. Prioritizing them is key to ensuring longstanding success. To learn more about Rocket and how we support our employees, visit https://www.rocketsoftware.com/rocket-jobs. Developer portals are one of the newest ideas to gain the interest of the technology community. We’ve known for a long time that individuals and practitioner teams are often the ones who drive software and infrastructure strategies in organizations. Especially with software development, the technologies that make development easiest and fastest are often the ones that win out, despite what centralized planning may start with. Rather than fight this bottom-up enterprise strategy, clever organizations recognize that teams are the ones that will identify the best technologies to meet their business goals. These organizations shift from policing how teams do software to partnering with those teams to give them the tools that make their work better. So, how should you proactively plan for acquiring and putting in place a developer portal rather than discovering that your organization has amassed various tools, technologies and frameworks after each development team has decided on their own?
What is a developer portal and how does it help?
A developer portal catalogs all of the applications and services that are used in your organization. For example, the applications might be components of your consumer-facing online banking app, and the services might be back-end services your applications use, like executing money transfers. Let’s think of these applications and services as “projects.” For many years, developers have used wikis and wiki-like content management systems to host project documentation and project management information. What’s different about a developer portal is that it borrows from the “infrastructure as code” thinking in DevOps and makes those websites more programmable and livelier. For example, a developer portal will describe each project in the portal, even the APIs that other teams can use. Think of this as “read” access to those projects. But a developer portal adds in “write” access to the projects, as we’ll explain below. Overall, developer portals help your teams in three important ways: Developers just want to write code and get it into production. In many environments today, starting a new project is like standing at the beginning of your organization’s software labyrinth. With a developer portal, your developer can shop a catalog where they can choose to launch an internal website, an API, an integration to your ERP, or whatever else you have predetermined as a pattern for your organization.
By clicking on their selection, they will soon have a project established in source control with the standard dependencies, code layout, IAM integration, and metadata to build and deploy their application. Hours of meetings and discussions have been reduced to a short online shopping excursion. The wasted time of finding and starting to use APIs and shared code is removed so developers can start working on applications. Your developer is focusing on business value instead of scheduling meetings. Security is confident the developer is following known, approved, compliant patterns. Architects are comforted the developer is building toward the overall strategic vision. Operators are sleeping well knowing the developer is leveraging automation to deploy to a well-known infrastructure pattern. All applications will have their bad days. It may be slow performance or moments of being offline. How will your teams respond? A developer portal gives them a single place to see the latest information. What are the metrics? What do the logs show? Is the underlying Kubernetes platform operating as you expect? Your teams don’t have to spend precious time researching where the logs are kept, or which cluster the application is running on. By going to the portal, they will be a few clicks away from information to help them quickly respond and, if necessary, restore service. Backstage is the leading example in this area. It was originally developed and open-sourced by Spotify to standardize the internal catalog, onboarding, and governance of applications and services. Spotify has thousands of microservices and, we think, they’d be much less agile if each of those microservices followed its own method of documentation, identifying the people responsible for that app, providing runtime analytics, and otherwise being an asset database. Now in its 1.0 release, Backstage is an incubating project at the Cloud Native Computing Foundation and is used by over a hundred organizations. We think that Backstage will likely become the favored developer portal for the Kubernetes world. And, indeed, we’re using Backstage in the VMware Tanzu Application Platform.
Developer portals aren’t only for developers
Clearly, developer portals make developers’ lives better. At a high level, developer portals improve your organization’s developer experience: the ease and speed with which developers can do their daily jobs. This makes developers not only more productive but also happier. For management, that improvement to developers’ day-to-day lives has strategic outcomes. For example, fighting for talent has become a key part of any CIO’s job. Having an organization with top-notch developer experience will go a long way to attracting and retaining developer talent. When enterprise architecture, operations, and security teams are seen not as bottlenecks but as enablers, it becomes easier to attract talent to those teams as well. Developer portals help create an environment where those teams are helping developers do the right thing. Improved developer experience here means it’s easier to put your needed governance and security controls in place. Of course, this doesn’t just apply to developers. If used to its full potential and by all, a developer portal provides an incredibly positive developer, operator, and security experience. While only one piece of the puzzle of improving how your organization does software, developer portals will accelerate you on your journey. To learn more, visit us here.
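To make the catalog idea above a little more concrete, the hedged sketch below reads service metadata out of a Backstage instance's catalog REST API. The base URL, token, and filter syntax are assumptions about a typical deployment rather than details from this article, so treat them as placeholders and check your own instance's documentation.

```python
# Hedged sketch: reading a Backstage software catalog over its REST API.
# The base URL, token, and filter syntax are assumptions about a typical
# deployment; adjust them to match your own Backstage instance.
import requests

BACKSTAGE_URL = "https://backstage.example.com"   # hypothetical instance
TOKEN = "replace-with-a-real-token"

resp = requests.get(
    f"{BACKSTAGE_URL}/api/catalog/entities",
    params={"filter": "kind=component"},
    headers={"Authorization": f"Bearer {TOKEN}"},
    timeout=10,
)
resp.raise_for_status()

# Each entity carries the "read" view the article describes: who owns it,
# what lifecycle stage it is in, and where to find more information.
for entity in resp.json():
    meta = entity.get("metadata", {})
    spec = entity.get("spec", {})
    print(f'{meta.get("name")}: owner={spec.get("owner")}, lifecycle={spec.get("lifecycle")}')
```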
The new Canadian airline will head straight for the clouds on its inaugural flight this summer, running its entire IT operation virtually. “Everything here is in the cloud. We are completely cloud. I have absolutely nothing on premises, anywhere, not even in colocation,” says Robert Pope, CIO of Canada Jetlines. Getting a new leisure carrier off the ground in the middle of a pandemic could prove perilous. But Pope believes the cloud will help Jetlines weather any market turbulence, giving it operational flexibility and reducing the spending tied to traditional IT. ServiceNow is using its Knowledge customer conference in Las Vegas to relaunch a feature of its latest software release that it feels did not get enough attention the first time around: Procurement Service Management. This is fresh territory for the Now Platform, which has so far been used to automate workflows in IT (such as service management or operations management), HR, or order management. With the chaos surrounding supply chains these days, though, the procurement function is ready for some love. Procurement Service Management made its debut in the San Diego release of the Now platform, which became generally available on March 23. At the time, it was overshadowed by the new release’s robotic process automation (RPA) capabilities but now it is getting its turn in the spotlight. ServiceNow is aiming to help enterprises automate some of their procurement busywork, and to provide more monitoring of workflow performance, freeing up procurement teams to solve more pressing issues. “The great thing for IT organizations is that, just like they’ve been able to help reimagine employee experiences in areas like HR and workplace service delivery, now they’re going to be able to demonstrate that value for their finance and their procurement partners as well,” said Colby Blakeman, director of product management for ServiceNow’s procurement business unit. The procurement automations are built on the same underlying platform as ServiceNow’s other workflow automations so, “Where CIOs have already made investments in learning the Now platform and building citizen developers, those same investments are going to benefit what they do within Procurement Service Management because we leverage that same infrastructure,” Colby said ahead of the Las Vegas event. The procurement workflows ServiceNow aims to automate are not the transactions at the heart of the procurement process, but the many tasks that surround them. “Our goal is to automate 100% of the low-value tactical tasks, status inquiries, requests for updates and other information so that procurement teams can focus on participating in the workflows that add value to the organization such as negotiation with the supplier in the sourcing process or working towards completion of various ESG initiatives,” he said. Colby gave the example of a request to expedite payment to a supplier: “We can structure workflows that allow the intake of that request, whether it’s an employee saying, ‘Hey, my supplier just asked for this,’ or a supplier saying that they need to be paid sooner because of a cash flow challenge.” Once the request has been captured, ServiceNow can verify the request’s validity, analyze whether it is justified, perhaps to negotiate better payment terms, collect the necessary approvals, and finally hand it off to the ERP system for payment.
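ServiceNow did not walk through the implementation of that intake step, but on the Now platform such a request would ultimately land in a table record that the workflow then acts on. The sketch below shows how a record like that could be created through ServiceNow's generic Table API; the instance URL, table, and field names are hypothetical, not the actual Procurement Service Management schema. The point, in ServiceNow's telling, is that the follow-on steps then run as a structured workflow rather than over email.

```python
# Hypothetical sketch of the intake step described above: creating a record that
# kicks off an "expedite payment" workflow via ServiceNow's generic Table API.
# The instance URL, table name, and field names are assumptions for illustration;
# the real Procurement Service Management tables may differ.
import requests

INSTANCE = "https://your-instance.service-now.com"   # hypothetical instance
TABLE = "x_procurement_expedite_request"             # hypothetical table name

record = {
    "short_description": "Supplier requests earlier payment due to cash-flow constraints",
    "supplier": "Acme Components Ltd",
    "requested_payment_date": "2022-06-15",
    "justification": "Critical single-source supplier; renegotiated terms pending",
}

resp = requests.post(
    f"{INSTANCE}/api/now/table/{TABLE}",
    json=record,
    auth=("integration.user", "password"),            # basic auth for brevity
    headers={"Accept": "application/json"},
    timeout=10,
)
resp.raise_for_status()
print("Created request:", resp.json()["result"]["sys_id"])
```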
“That process typically lives in email, or outside of source-to-pay or ERP technology. ServiceNow is your bread and butter to take those unstructured processes and structure them with workflow,” he said. Although ServiceNow will provide some ready-made workflows for enterprises to adopt or adapt, workers can also use the Process Automation Designer, another part of the Now platform, to build a multi-step, guided workflow. “We’ve had customers go live in a few weeks. These aren’t month-long, year-long, multiyear-long implementations from an ERP perspective,” Colby said. ServiceNow is also hoping that, rather than running reports on the core ERP, procurement managers will turn to its software for insight into how their teams are performing. “We have a performance analytics capability in our platform, and so we’ve built native spend and work analytics for procurement teams that are accessible in different formats,” he said. Those analytics capabilities can help staff prioritize work queues and help managers balance workloads across teams. Colby also sees a role for ServiceNow in reducing maverick procurement by employees accustomed to making one-click purchases at home. Enterprises can put a service catalog of items approved for purchase on their employee portal or app, adding a form to collect information from employees about off-catalog purchases if they can’t see what they want, then build a workflow to route that request to a sourcing manager to help the employee get what they need, he said. With both Oracle and SAP having announced that they will end support for their legacy on-premises ERP systems in 2030, many enterprises are faced with a dilemma: Should they move to the cloud now then innovate on that clean base to respond to post-pandemic changes in the business environment, or should they innovate around their legacy system now, and swap out their ERP core later? ServiceNow is backing both horses, supporting connections to older on-premises ERP systems such as SAP ECC, as well as the newer generation of cloud-centric systems such as SAP S/4HANA. It has also teamed up with Celonis to look for processes it can optimize, and acquired Gekkobrain to help it understand how customers’ systems have been customized and move that customization to ServiceNow, said Kirsten Loegering, the company’s vice president of product management for ERP solutions. “Whatever the customer decides, whether they want to move forward with a legacy modernization and leverage Celonis and Gekkobrain to uncover the potential and then leverage ServiceNow in the context of their modernization, or whether they want to stay on their ECC system and wrap something around it for the time being to modernize the experience for the user, we can offer both,” Loegering said. It is beginning to look as though ServiceNow CEO Bill McDermott, previously CEO of SAP, is on a mission to replace ERP systems from the outside in. There’s never been more pressure on IT to deliver technology-enabled change to the business. Yet the volume of demands placed on IT can make it challenging for IT leaders to concentrate IT resources on the right efforts. Here, agility is essential, and smart IT leaders are doubling down on efforts to streamline IT, whether that involves reprioritizing projects and realigning the IT portfolio, rationalizing applications and pursuing cloud-native approaches, increasing automation through DevOps or AIOps adoption, or overhauling the structure of IT operations. 
CIO.com talked with four IT leaders about what’s driving their efforts to streamline IT for greater agility, how they’re doing it, what the biggest challenges are, and the tricks they’ve uncovered for doing it well. “Focus makes it easier for employees to perform at a higher level,” Chiranjoy Das, CIO at IT services firm Randall-Reilly, wrote in a LinkedIn post earlier this year describing how he had been “ruthlessly canceling projects to increase the organization’s focus.” But that’s just one way Das is recommitting his technology organization to become more agile and product-driven. Das’s biggest priorities include integrating Randall-Reilly IT with client systems to become more essential to their businesses, rearchitecting old systems using modern approaches such as microservices, implementing AI and machine learning to automate manual processes, and delivering clean, standardized data for analytics and monitoring. Compounding the challenges is an immense pressure for speed and agility from IT: “We are, as a matter of fact, having to go live with less-than-perfect features in order to be agile and meet customers’ demands,” says Das, noting that he’s reduced sprints from two weeks to one. “This forces the team to be on their toes, and in turn puts lots of pressure on the Scrum teams because we cannot compromise security.” There’s only so much that can be squeezed out of existing resources, which is why Das began reassessing the projects on his team’s plate. The issues that led to an overfull pipeline included department heads pushing for pet projects that may not have strategic benefits for the enterprise at large, the classification of too many big projects with iffy ROI as urgent, and what Das calls “big, shiny ideas,” like crypto projects that have not been properly vetted. As delivery — and his burned-out team members — suffered under the weight of expectations, Das started to go after some of the big issues, including lack of business cases for certain projects, no executive sponsors or project owners for others, and the lack of time available to address technical debt. He has scaled back the total number of projects IT is taking on and has been advocating for a shift to “product-centric” software development, meaning no work will be done unless a product owner prioritizes it based on what stakeholders want. In addition to weekly sprints, IT has also implemented CI/CD and has automated regression testing, which has boosted IT agility and quality. Since Das began making these changes, the mood in IT has lifted. Talent retention has increased, meetings have decreased, and IT is working on a greater number of strategic projects aligned with business goals. Company leaders appreciate that IT took a firm stand, says Das. “But they are watching us to see if we can deliver on the ones that we have promised,” he adds. “Higher accountability but lesser projects help us concentrate better.” Leadership advice: Educating the business on why IT is reducing its slate of projects — to deliver more consistently — is critical, Das says. “Most projects do not add value,” he says, and the onus needs to shift to business leaders to prove ROI before sending projects to IT. Also, an agile mentality and culture are far more important than an agile development methodology, Das says. IT leaders must create a sense of urgency before jumping headlong into things like CI/CD. Other key elements for agility include establishing a data warehouse, APIs, proper security, and scalable architecture.
“Without the foundational blocks, IT cannot be agile,” Das says. Empowerment is the name of the game at Ricoh USA. To align with a business strategy focused on customer centricity and innovation, the technology group must make quick decisions and adjust to changing business conditions. “It can’t be a free-for-all with everyone doing their own thing in silos,” says Bob Lamendola, senior vice president of technology and head of Ricoh USA’s digital services center. “But it also can’t be monolithic.” That has led to three key strategic priorities for IT at Ricoh: micro automation, adopting an integration framework, and enabling citizen developers. In fact, the technology function is now organized around automation, integration, and analytics. By focusing on incremental improvements in business processes with micro automation, IT has been able to rack up quick wins against efficiency objectives. “We continue to have long-term large-scale projects, focused on broader transformation changes, but by focusing on the smaller wins, we’re embracing agility and demonstrating to the organization that we’re listening to their immediate needs while also working toward longer-term goals,” Lamendola says. Recognizing the need to support a complex, hybrid cloud infrastructure for the foreseeable future, Lamendola’s team has developed a flexible integration framework that enables easier interconnectivity while maintaining security controls. An API layer allows Ricoh’s third-party partners to connect with the company’s ERP and other partner services to create a better user experience with minimal operational support. “We recognize the need to be agile with the ability to change partners or components of our hybrid model as required,” Lamendola explains. “By placing our integration framework at the core of our architecture, it becomes much less of a heavy lift each time we want to incorporate a new solution into our organization.” IT has also made investments in data aggregation, engineering, and analytics to unleash the power of citizen developers, who can use a framework of centrally developed rules and business processes to transform data into information that drives actions. “They are empowered to build their own dashboards and models to accomplish their teams’ goals,” says Lamendola. A data analytics community also promotes idea sharing, celebration of achievements, and consistency. “The authentic nature of this community combined with the structured approach to data aggregation and accessibility have provided a balanced level of control at a functional and corporate level,” he says. It’s been a significant cultural shift that has required IT to get comfortable with being uncomfortable. Traditional jobs are transforming and new mindsets are required. “We’ve had successes and we’ve made sure we ‘hyper-celebrated’ these wins,” says Lamendola. “Long-term we’ll focus on productivity, but it’s essential we get the culture piece right from the start to unlock the potential.” Leadership advice: If you build it, change will come. “We’ve been keenly focused on the foundation and less about the end product. The end product is important, but it doesn’t move us forward,” says Lamendola. “We’re on a journey and still mid-transformation, but we’re working to leverage past investments to accelerate the future foundational strategy.” Also, it’s important to know when to ease up on the controls. 
“Yes, policy and controls must exist,” Lamendola says, “but you have to also embrace a sense of flexibility to allow agility to happen.” Content services provider Hyland is undergoing a major systems overhaul to fuel company growth and create its next generation of platform-based products. This has also placed intense pressure on IT to deliver continuous value, says CIO Steve Watt. “The business can no longer wait for all projects to be Big Bang delivery,” he says. “They expect value delivered continuously throughout project lifecycles.” For Watt, streamlining means removing barriers to employee performance. Here, automation is critical, enabling teams to self-serve both within and outside IT with a focus on outcomes. “By having the right resources engaged directly and focused on those outcomes, teams become a tightly functioning unit with a narrow vision of what success looks like,” he says. That’s why Watt is restructuring IT into product- or process-aligned teams. Each unit has its own reporting structure incorporating staff from a range of disciplines, including solution and platform engineers, product owners, agile process managers, and IT staff skilled with infrastructure, application development, and integration. For example, one team is focused entirely on Hyland’s “quote to order” process, with the work done continuously using an agile framework to execute against an ongoing backlog of features, improvements, and fixes. Since the restructuring, there’s greater focus and less time wasted prioritizing or rationalizing work that may never be done. “Plus, we’ve been better aligned with the business, and have seen faster implementations and less rework,” says Watt. Ongoing communication with the business around this new way of working has been essential. “We’ve sought — and gotten — buy-in that some things need to be cut out to focus on what is important and to ensure we are focused on the right work with the limited resources we have,” Watt explains. Leadership advice: IT can still lose time and focus if the business is not prepared to participate at the right times and on a regular cadence. “Make sure your business stakeholders are up to speed on what agile really means and how your execution in that framework will function and where they fit into that process,” says Watt. For the IT team at Inteleos, a nonprofit medical certification organization, the pressure to increase speed and agility is self-imposed, says CIO Juan Sanchez. Implementation of an integration platform as a service (IPaaS) is at the top of the IT agenda, along with reducing technical debt through core platform refreshes, building a data science capability for internal and external customers, and creating zero-touch employee onboarding and provisioning. “We can drive big impact if we take advantage of these tools and opportunities,” Sanchez says, adding that “the ultimate way forward in streamlining IT is to leverage workflow automation.” By leveraging mature SaaS platforms with complete APIs, development teams at Inteleos can focus on the key operations of a workflow instead of the underlying details of code and the never-ending refactoring work. Infrastructure teams can also focus on building out workflows that deliver direct value to the business. By streamlining and automating the account and app provisioning workflow instead of keeping a domain controller operating, for example, IT can deliver a much better onboarding experience to new hires. 
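Inteleos hasn't published its provisioning workflows, but the approach of connecting SaaS nodes that Sanchez describes can be pictured as a small orchestration script that calls each platform's provisioning API in sequence. Every endpoint and payload below is a placeholder invented for illustration; a real implementation would call the organization's identity provider, HR system, and application APIs, or an iPaaS connector in front of them.

```python
# Illustrative-only sketch of a zero-touch onboarding workflow that chains
# SaaS provisioning APIs together. Every URL and payload here is a placeholder.
import requests

def provision(step: str, url: str, payload: dict) -> None:
    """Call one provisioning endpoint and fail loudly if it rejects the request."""
    resp = requests.post(url, json=payload, timeout=10)
    resp.raise_for_status()
    print(f"{step}: ok")

def onboard(employee: dict) -> None:
    # 1. Create the identity first; downstream apps key off it.
    provision("identity", "https://idp.example.com/api/users", employee)
    # 2. Grant the baseline application bundle for the employee's role.
    provision("apps", "https://iam.example.com/api/grants",
              {"user": employee["email"], "bundle": employee["role"]})
    # 3. Open the IT ticket that tracks laptop shipment and day-one tasks.
    provision("ticket", "https://itsm.example.com/api/onboarding",
              {"user": employee["email"], "start_date": employee["start_date"]})

onboard({"email": "new.hire@example.com", "role": "analyst", "start_date": "2022-07-01"})
```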
“If done right, technology teams will build systems more like a logistics system, connecting SaaS nodes by building the most efficient routes between them,” says Sanchez.

Currently, many business processes at Inteleos require manual intervention. Change requests are often focused on making the process better for staff but not necessarily for the customer. Sanchez hopes that when business process owners understand that they can automate repetitive tasks, it will unlock their ability to think about a process in a more integrated way, with a greater focus on whom the process benefits and how. “We’re looking at our operation as an elastic system,” says Sanchez. “We try to identify where bottlenecks are happening in our delivery of value and think about where and how we should scale different parts of that system.”

Inteleos is also embracing the principle of “Goldilocks IT” — building just the right amount of technology, no more, no less. Building new technology should almost be an option of last resort, according to Sanchez. “While it seems counterintuitive, each new technology solution has immediate and long-lasting costs,” he says. “We have to be careful not to jump in too fast to solve everything with technology first.”

So far, KPIs have improved, and traditional IT service metrics such as cycle times and SLA breaches have decreased. Through better architectural approaches in its API designs, the development team has exceeded its concurrent-request performance target by 250%.

It’s not easy to shift IT’s image from black-box transactional function to business partner. But increasing dialogue between IT and its stakeholders is moving the needle at Inteleos. “We are having more strategic impact and helping guide the organization at a much more macro level than before,” Sanchez says. “Being careful to approach the dialogue as a collaboration instead of a prescription is important, and it’s a skill that most technology teams need to develop.”

Leadership advice: Be willing to get creative with regard to talent. “Without this ingredient, the best architectures in the world will fail,” says Sanchez. “We’ve had to learn quickly where to find people, at what skill level, and what type of engagement works best for what the system requires at any given time.”

Sanchez also agrees that, above all, agility is a mindset. “It has to be in the minds of the technology team first and then have that mindset permeate how we interact with the business. Through those interactions you will then see agility manifest in the business,” says Sanchez. “If as a team we can’t conceive ideas beyond what’s in front of us, we’re destined to stay transactional.”