Do you meet all the modern authentication requirements?

Microsoft’s plan to improve cloud security could mean problems for incompatible on-premises systems and applications — and the users who rely on them.

IT administrators get constant reminders that security is one of their top priorities at every layer of the infrastructure. This responsibility requires them to take steps to secure their environments, a duty that extends to the Office 365 tenant. Microsoft will turn off basic authentication support in Office 365 on Oct. 1 and make modern authentication mandatory to use the collaboration platform. This change might cause issues for admins who have not fully evaluated their organization's infrastructure and prepared for the updated authorization and authentication protocols.

A switch to modern authentication is easy, but preparation is needed

A change to modern authentication on the Office 365 tenant is easy to implement and far more secure. IT administrators can enable modern authentication organization-wide with a simple PowerShell command (for Exchange Online, Set-OrganizationConfig -OAuth2ClientProfileEnabled $true) or via the web admin portal. But once the change is made, any basic authentication attempt from a legacy Microsoft Office app or third-party product that connects to Office 365 will fail, causing significant disruption to end users.
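Before making the change, it can help to see the failure mode firsthand. Below is a minimal diagnostic sketch in Python (not Microsoft tooling; the credentials are placeholders) that probes whether a mailbox still accepts a basic, username-and-password IMAP login against Exchange Online. This is exactly the kind of login legacy clients attempt, and it is what will start failing once modern authentication is enforced.

    import imaplib

    def basic_auth_accepted(user: str, password: str) -> bool:
        """Attempt a legacy basic-auth IMAP login against Exchange Online."""
        try:
            conn = imaplib.IMAP4_SSL("outlook.office365.com")
            conn.login(user, password)  # basic authentication; no OAuth 2.0 token
            conn.logout()
            return True
        except imaplib.IMAP4.error:
            return False  # rejected; the tenant requires modern authentication

    print(basic_auth_accepted("user@example.com", "placeholder-password"))

A True result before the cutover indicates that the account, and any client configured the same way, still depends on basic authentication and needs attention.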

Some of the typical issues users will experience after the move to modern authentication include trouble connecting to email from some legacy versions of Outlook, including clients older than Outlook 2013 on Windows and other legacy versions on macOS. To meet modern authentication requirements on these systems, Microsoft recommends upgrading to a version newer than Outlook 2013, but IT cannot always upgrade all the Office apps. These issues are not limited to legacy versions of Outlook; they also affect other Microsoft Office products, such as Word, Excel, PowerPoint and Microsoft Teams.

Meeting modern authentication requirements might require heavy lifting

Many IT administrators have numerous issues to deal with that might limit their ability to perform the necessary upgrades for all their end users. These issues can include:

  • a lack of device management tools to push out Office 365 upgrades to all users;
  • third-party add-ons to the Microsoft Office suite that are incompatible with newer versions of the Office apps;
  • remote users whose machines are harder to access and update;
  • hardware that may not meet the minimum requirements for the new Microsoft Office suite; and
  • web filters that block the Microsoft sign-in popup used by modern authentication, preventing users from seeing the login prompt.

One way to overcome the lack of device management tooling to push out the latest Office upgrade is the Office Deployment Tool (ODT), a command-line utility that downloads and deploys Microsoft 365 Apps to Windows client machines. ODT gives administrators more control over new Office app installations: it is driven by an XML configuration file and run with its /download and /configure switches, and admins can also use it to deploy specific tools and languages to machines without user interaction. The tool is available to download from Microsoft.
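As a sketch of how an ODT deployment is typically driven, the short Python wrapper below writes a minimal configuration file and invokes the tool. It is illustrative, not part of ODT itself; the product ID, channel and paths are example values to adapt, and it assumes ODT's setup.exe sits in the current directory.

    import subprocess
    from pathlib import Path

    # A minimal ODT configuration: 64-bit Microsoft 365 Apps from the Current
    # channel, in English, installed silently. IDs here are example values.
    CONFIG = """<Configuration>
      <Add OfficeClientEdition="64" Channel="Current">
        <Product ID="O365ProPlusRetail">
          <Language ID="en-us" />
        </Product>
      </Add>
      <Display Level="None" AcceptEULA="TRUE" />
    </Configuration>
    """

    Path("configuration.xml").write_text(CONFIG)

    # Stage the installation files, then install without user interaction.
    subprocess.run(["setup.exe", "/download", "configuration.xml"], check=True)
    subprocess.run(["setup.exe", "/configure", "configuration.xml"], check=True)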

Other tools and applications will also require updates to work with modern authentication. Some third-party email apps will need an upgrade to the latest version that supports modern authentication; IT administrators will need to consult their software vendors to keep email working. However, not every application will meet the modern authentication requirements, regardless of version; once the Office 365 tenant makes the switch, these apps will lose connectivity to email servers. Apple Mail on macOS 10.14 or older faces the same challenge, but an upgrade to a newer version adds support for modern authentication.

What are the prerequisites for modern authentication in hybrid environments?

For organizations in a hybrid environment that host some Microsoft services on premises, such as Exchange Server and Skype for Business, it is highly recommended to update or upgrade those servers to the latest versions or patch levels that support modern authentication. For Microsoft's email platform, this means Exchange Server 2013 CU19 and up, Exchange Server 2016 CU8 and up, and Exchange Server 2019 CU1 and up.

If the organization uses Active Directory Federation Services (AD FS) for SSO or other authentication needs, then IT must run AD FS 3.0 on Windows Server 2012 R2 or later for federation. For users on Skype for Business Server, one requirement is at least the May 2017 cumulative update (CU5) for Skype for Business Server 2015 or later. For the hybrid setup, the following requirements must be met to support the integration of modern authentication with Exchange Online and other Office 365 services:

  • a Skype for Business Server 2019 deployment in which all servers run Skype for Business Server 2019;
  • a Skype for Business Server 2015 deployment in which all servers run Skype for Business Server 2015;
  • a deployment with a maximum of two different server versions across Skype for Business Server 2015 and Skype for Business Server 2019;
  • the latest cumulative updates installed on all Skype for Business servers; and
  • no Lync Server 2010 or 2013 anywhere in the hybrid environment.

How to prepare for the Microsoft modern authentication deadline

Given the risk associated with the move to modern authentication, administrators will need an inventory of the systems that interact with Office 365 services. As part of this plan, administrators must outline where upgrades will be needed and any additional changes to meet modern authentication requirements, such as OS upgrades or replacement of apps that will not work with the updated security protocols.
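One practical way to build that inventory, assuming an Azure AD app registration with permission to read the tenant's sign-in logs (AuditLog.Read.All), is to pull recent sign-ins from the Microsoft Graph API and group them by client app, since legacy protocols such as IMAP4, POP3 and Exchange ActiveSync stand out immediately. The Python sketch below illustrates the idea; token acquisition is omitted, and the token shown is a placeholder.

    from collections import Counter
    import requests

    TOKEN = "placeholder-access-token"  # needs AuditLog.Read.All

    resp = requests.get(
        "https://graph.microsoft.com/v1.0/auditLogs/signIns?$top=500",
        headers={"Authorization": f"Bearer {TOKEN}"},
        timeout=30,
    )
    resp.raise_for_status()

    # Count sign-ins per client app; legacy protocols indicate basic auth use.
    usage = Counter(s.get("clientAppUsed") or "unknown" for s in resp.json()["value"])
    for app, count in usage.most_common():
        print(f"{app}: {count}")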

Failing to get ahead of the looming deadline will cause issues with the business email and communications that many companies rely on for their day-to-day activities.

Google to delete Location History data from places it should never have been tracking

Google has vowed to delete users’ visits to medical facilities from their Location History, which raises the question: why was the company ever tracking this in the first place?

In a blog post late last week, the mapping giant said that visits to places like abortion clinics and addiction treatment facilities will be automatically removed from Location History ‘soon after they visit’.

The vow comes following the United States Supreme Court’s decision to overturn Roe v. Wade, a landmark legal decision that had previously guaranteed the right to abortion at a national level in the United States. The power to allow (or not allow) abortions has now been handed back to the individual states, with many already putting bans in place.

The court’s deeply contentious decision has led to an awareness campaign for women to delete data from period tracking apps, for example, due to the threat of prosecution in some states for people who have abortions and healthcare workers who provide access to them.

Now Google is saying that, if Location History is turned on (it’s off by default), then these sensitive locations will be deleted.

“Some of the places people visit — including medical facilities like counseling centers, domestic violence shelters, abortion clinics, fertility centers, addiction treatment facilities, weight loss clinics, cosmetic surgery clinics, and others — can be particularly personal,” the company says in a blog post.

“Today, we’re announcing that if our systems identify that someone has visited one of these places, we will delete these entries from Location History soon after they visit. This change will take effect in the coming weeks.”

Google also says it is adding a Fitbit feature that enables women to delete their menstruation logs in batches, rather than one at a time.

Trusted Take

So well done to Google, I guess? It’s a shame it took this tyrannical removal of people’s rights for the company to realise these deeply personal visits to healthcare providers might not be something they want sitting alongside their recreational visits to pubs, restaurants and concert venues in an all-encompassing Location History folder.

What is POSIX (Portable Operating System Interface)?

POSIX (Portable Operating System Interface) is a set of standard operating system interfaces based on the Unix operating system. The most recent POSIX specification — IEEE Std 1003.1-2017 — defines a standard interface and environment that an operating system (OS) can provide to support POSIX-compliant applications. The standard also defines a command interpreter (shell) and common utility programs. POSIX supports application portability at the source code level, so applications can be built to run on any POSIX-compliant OS.

A brief history of the POSIX standard

The POSIX interfaces were originally developed under the auspices of IEEE. However, the POSIX standard is now being developed and maintained by the Austin Common Standards Revision Group, commonly referred to as the Austin Group.

The Austin Group is a joint working group made up of members from IEEE, The Open Group and Joint Technical Committee 1 of the International Organization for Standardization (ISO) and International Electrotechnical Commission (IEC). IEEE owns the POSIX trademark. The Open Group, which owns the Unix trademark, is a global consortium that develops technology standards.

POSIX emerged out of a need to make applications more portable across diverse systems. In the early days of computing, programmers had to rewrite their applications for each computer model and OS. This started to change when IBM introduced its line of System/360 computers, which could all run the same OS, OS/360. With these new systems, applications could be made more portable, saving enormous amounts of development time.

Soon after, Unix entered the scene and introduced an even greater potential for application portability. Unlike other OSes, Unix could run on machines from different vendors. However, multiple variations of the OS soon appeared, and the promise of portability quickly faded. Even so, Unix continued to grow in popularity, and it soon became apparent that some type of standard was needed to meet the challenges of application compatibility. That need led to the development of the POSIX specifications. Released in 1988, the first standard was based on AT&T Unix System V and Berkeley Software Distribution Unix, the two most prominent Unix systems at the time.

Basic Unix OS commands

What is the POSIX standard?

Over the years, the POSIX specifications have continued to be revised and reorganized. At one time, each standard was informally named POSIX, followed by a decimal and then the standard’s number. For example, POSIX.1 was the standard for an application programming interface in the C language, and POSIX.2 was the standard shell and utility interface for the OS. These standards were officially named IEEE Std 1003.1 and IEEE Std 1003.2, respectively.

There were also amendments to the base standard, such as IEEE Std 1003.1b-1993, which dealt with real-time extensions. However, all the various specifications have been rolled into one standard — IEEE Std 1003.1 — which was last updated in 2017 and published in 2018. Officially, it is called IEEE Std 1003.1-2017. However, it is also referred to as POSIX.1-2017 or, more informally, POSIX.1.

The POSIX standard goes by other names as well. The Open Group calls it The Open Group Base Specifications Issue 7, 2018 edition, and ISO/IEC refers to it as ISO/IEC 9945:2009. ISO/IEC adopted the standard in 2009 and added Technical Corrigendum 1 in late 2012 and Technical Corrigendum 2 in March 2017, putting it on par with IEEE Std 1003.1-2017.

The POSIX.1-2017 specifications define the fundamental services needed to build POSIX-compliant applications. They establish standard semantics and syntax to help developers write portable applications. POSIX.1 is made up of the following four volumes:

  1. Base definitions. Provides common definitions for the specifications, including information about terms, concepts, syntax, service functions and command-line utilities.
  2. System interfaces. Provides details about interface-related terms and concepts, and defines the functional interfaces available to applications accessing POSIX-conformant systems.
  3. Shell and utilities. Describes the commands and utilities available to applications accessing POSIX-conformant systems, including the command language used in those systems.
  4. Rationale. Includes historical information about the standard’s contents and why certain features were added or removed.

POSIX.1 takes a “write once, adopt everywhere” approach to the specifications by providing a set of fundamental services needed to efficiently build applications. The standard emphasizes the facilities and characteristics that are important to application development, rather than focusing on the techniques needed to achieve these capabilities. The POSIX standard is intended to be used by both system implementors and application developers.
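To make source-level portability concrete, the sketch below exercises several POSIX.1 system interfaces (pipe(), fork(), write(), read() and waitpid()) through Python's os module, which wraps the standard's C functions. The same source runs unchanged on Linux, macOS or any other POSIX-compliant OS, though not on native Windows, where fork() is unavailable.

    import os

    r, w = os.pipe()        # POSIX pipe(): one-way channel between processes
    pid = os.fork()         # POSIX fork(): duplicate the calling process

    if pid == 0:            # child process
        os.close(r)
        os.write(w, b"hello from a POSIX child process\n")   # POSIX write()
        os._exit(0)         # POSIX _exit(): terminate the child immediately
    else:                   # parent process
        os.close(w)
        print(os.read(r, 1024).decode(), end="")             # POSIX read()
        os.waitpid(pid, 0)  # POSIX waitpid(): collect the child's exit status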

Google could introduce face unlock feature with upcoming Pixel 7: Report

While the Pixel 6 and the Pixel 6 Pro were the first flagship smartphones built around Google's own Tensor chip, they lacked the Face Unlock feature. In a report, 9to5Google compiled all the information available about Face Unlock on Google Pixel smartphones. Back in 2019, Google launched the Pixel 4 with dedicated hardware, a radar chip called Soli, to enable the Face Unlock feature, much like the Face ID setup on an iPhone.

However, Google removed the entire setup from the Pixel 5, which launched in 2020. With the Pixel 6 Pro, Google again planned to introduce a face unlock feature.

Last year, just before the Pixel 6 Pro came out, leaked marketing copy and a Play Store listing suggested that the smartphone would come with a face unlock feature, but that did not happen either. This April, the publication reported that the Pixel 6 Pro was supposed to ship with Face Unlock, but Google pulled the plug on the feature at the last moment over accuracy and battery-related concerns. However, Google might launch the feature with the Pixel 7 and Pixel 7 Pro.

Google Pixel 7 Pro may have a brighter display

The Google Pixel 7 Pro will reportedly feature a new Samsung S6E3HC4 panel. Although the panel has the same resolution as the one used on the Pixel 6 Pro, the Android 13 beta's source code suggests that the Pixel 7 Pro's display will be slightly brighter. Tipster Mishaal Rahman noticed the new piece of code, which indicates that the Pixel 7 Pro might achieve up to 600 nits of brightness in manual mode, about 20% higher than what users could set on the Pixel 6 Pro.

Further, the tipster also suggests that the Pixel 7 Pro's auto mode could take the maximum brightness up to 1,200 nits, almost 50% higher than the Pixel 6 Pro. In addition, the new S6E3HC4 panel could support advanced features like HLG imagery and HDR10. The refresh rate on the Pixel 7 Pro could vary from 10Hz to 120Hz, and the display might also support a native FHD resolution mode to extend battery life.

Search for free website hosting surges as economic downturn bites

Combined search for free website builders and free web hosting is at its highest in almost two years as people come to terms with the cost-of-living crisis and wider economic downturn.

From May to June 2022 alone, search volume for free web hosting doubled, and the number of people looking for free website builder services also rose sharply.

Big tech regulation needs both privacy and antitrust reform

A better understanding of the intersection between data privacy and antitrust laws could lead to more effective regulation of tech giants.

That’s according to Erika Douglas, assistant law professor at the Temple University Beasley School of Law. Douglas recently spoke on how data privacy and antitrust laws can rein in the power of tech giants during the American Antitrust Institute’s 23rd annual policy conference in Washington, D.C.

Douglas said more work is needed to understand the interactions between the two types of laws, particularly before a federal data privacy law wins approval. There has been recent bipartisan momentum for a federal data privacy bill, the American Data Privacy and Protection Act, also known as the “three corners” bill.    

In this Q&A, Douglas discusses why a better understanding of the intersection between these two types of laws is critical, particularly for the success of antitrust cases already being brought against large tech companies like Google and Meta, which owns Instagram and Facebook.

Why do you think it’s important to discuss the intersection of antitrust and data privacy laws?

Erika Douglas: It’s a thousand antitrust actions and a thousand different privacy reforms, but until a couple of years ago, there was not very much dialogue between the two of them, particularly in the U.S. There’s a bit more discussion of how the two work together in the European Union because they have stronger privacy and competition laws, and they’re enforced more often. The U.S. is coming around to these big battles against Google and Facebook. The allegations don’t get much attention, but in both the Google and the Facebook cases, it’s alleged that the companies used their monopoly power to erode privacy. Did they? Is that why privacy was eroded, or was privacy eroded for some other reason? There are a lot of questions there.

What might some of the challenges be for antitrust law if a federal data privacy law were enacted?

Douglas: What is interesting here for antitrust is that almost all of those statutes, including the three corners bill, are making data privacy more of a right. So are state laws. The flurry of mini General Data Protection Regulations [the European Union privacy law] at the state level is turning privacy into a right. And privacy in the U.S. has been a consumer protection interest, but it hasn’t been a right. A right is something different in law, and a right is more difficult for antitrust law to deal with because antitrust law equates everything to quantitative, monetary terms.

But if privacy is a right, I think that’s going to be a real challenge for antitrust law, because how do you compare something that’s quantifiable in competition to something that is a right or interest? That means we’ll have to think a little bit more about exceptions or immunities between data privacy and antitrust law.

This is going to become an issue if and when the U.S. gets a federal omnibus privacy law, which is still a big question mark in people’s minds. The three corners bill is definitely progress; it has more bipartisan support than we’ve seen before.

What other challenges might arise from implementing a federal data privacy law without fully understanding its interaction with antitrust law? 

Douglas: When we do get that privacy law, there will be some big questions. You can see in that bill there’s a duty of loyalty [acting in the best interest of users] for example that requires data minimization [limiting user data collection]. But antitrust law is really seeking data flow [greater exchange of data between companies] right now. There are all these proposals for legislation in antitrust law that would mandate interoperability. So how do you reconcile at a policy level and then at a legislative level data minimization on one hand, and data flow on the other hand? There are ways to reconcile them, and there are ways to have interoperability that maintains privacy, but the two legislative arcs seem to be going in different directions.

As policymakers consider data privacy law and antitrust law reform, should they be taking these conflicts into consideration?

Douglas: You can see some consideration of it, for example, in Klobuchar’s bill, the American Innovation and Choice Online Act, that would impose mandatory [data] interoperability. There is an exception for privacy, but privacy isn’t defined. If we think competition is more important than privacy, don’t include exceptions. If you’re going to include exceptions, what does that mean? Because large digital platforms are going to try to fall within the exceptions related to privacy, which are not defined in the legislation. There’s a thin consideration of it on the antitrust side. On the privacy side, there’s almost no consideration of antitrust law. To be fair, in the three corners bill, there are exceptions for compliance with other federal law that could, without mentioning antitrust law, potentially apply to antitrust law.

Will a better understanding of the intersection of data privacy and antitrust law ultimately lead to better regulation of digital giants?

Douglas: It definitely will lead to more sophisticated and nuanced regulation that has more chance of success. We’ve talked about the tension between these two areas, but there are many commonalities: they’re both seeking to encourage consumer choice, and they’re both combatting corporate power. The tensions are a [more complex] digital policy question to answer. But if the two areas are going to work together in a way that is successful and leads to comprehensive, effective digital policy, we need to think about those escape hatches that are appropriate on each side, or how the law might be used in a way that’s unexpected because one side argues there’s a conflict. You can see this playing out in cases like Epic v. Apple, where Apple engaged in anticompetitive conduct according to the Northern District of California, but then showed that data privacy considerations were a justification. It’s a useful example. If these two areas of law don’t think about their interactions with each other, then big cases against big platforms might be unsuccessful for privacy law reasons. There has to be better understanding here to have effective digital policy.

Editor’s note: Responses have been edited for brevity and clarity.

Makenzie Holland is a news writer covering big tech and federal regulation. Prior to joining TechTarget, she was a general reporter for the Wilmington StarNews and a crime and education reporter at the Wabash Plain Dealer.

A Response to the Web 3.0 Benefits Debate by WAM

Nowadays we’re facing major paradigm shifts, faster than we could ever imagine. The question is, should we keep up with them? Is it all groundbreaking, or just well-crafted babble?

We’re in Web 3.0. But what is it actually and what are the benefits?

Web3 is a new era on the internet. It’s a place where users have agency over their data, their money, and their identity.

It uses blockchains, cryptocurrencies, and NFTs to give power back to the users in the form of ownership.

It’s a decentralized network, the internet that we’ve always wanted—one where users have control over their data, and the ability to move it wherever they want.

All of this happens on a blockchain-based platform where everything is open source and transparent.

In a recent interview with Tyler Cowen, Marc Andreessen, the tech entrepreneur and crypto investor, struggles to explain a single Web 3.0 use case.

When asked by Tyler Cowen, “What is the concrete advantage of Web 3.0 for podcasts? Why is a podcast better through Web 3.0? Why can’t we just put it out there?” Marc replies rather sloppily that the “most obvious thing is money. You don’t get paid,” then adds that podcasters can also “pick their business models” before struggling to explain what the advantages of Web 3.0 really are.

As mentioned above, Web 3.0 is all about freedom, and one answer to the questions above is that it affords podcasting tamperproof persistence. Irrespective of current-year politics, you as a podcaster are free, truly free, to discuss and host whatever content you wish.

The current status quo is that at any moment a group of highly motivated individuals can pressure the hosting platform and distribution systems to de-platform your content. So you are not free in any meaningful sense with the current layout of Web 2.0.

These are just a few examples, but Web 3.0 affords creators the ability to detach themselves from the silo problems of Web 2.0, where every single platform is doing everything it can to corner its respective market and trap users within its own ecosystem.

Intrigued by this topic, Daniel Tamas, the CEO of WAM, the first hyper-casual gaming platform on the blockchain where you can earn crypto and NFTs, offered some interesting input in a video response to that interview, covering the benefits of blockchain and the main differences from Web 2.0.

He states that there has always been a debate between old and new technologies and draws an analogy with electrical current, which at first merely replaced the candles and gas that lit the streets (an immediate outcome), but

“in time it proved to be the technological leap mankind needed in order to advance and create. All these blockchain technologies that currently emerge are still in their infancy.

If you talk about blockchain today, most people talk about coins that get sold on exchanges. Everybody looks at the bear market or the bull market because that’s where the fun is and people imagine themselves to be billionaires.

This means the blockchain itself is already a wealth creation tool, but it’s a layer one creation tool.”

Basically, the blockchain is the infrastructure for the next generation of businesses like WAM. Businesses are starting to explore new models that are better, faster, and cheaper, to create wealth for themselves and for other people.

And “this is where dApps come in. Because for blockchain to go mainstream, most people need a utility for themselves.

This utility is simple, you just do a thing that you have always done, in a new and better way. WAM is one of the ways people will experience blockchain and create wealth for themselves.”

By now some might be convinced of Web 3.0 and blockchain’s advantages, though others might still wonder: why should they use blockchain?

Why use blockchain in games when you could do the same in Web 2.0 games? Add in the variety of scams, and people get demoralized, losing hope and interest in something that carries a whole lot of untapped potential.

Speaking of the dangers and scams, to Daniel, “it has always been like this when you have predatory behavior and people who exploit the natural desire of men and women to be wealthy, without putting too much effort in it.” So, nothing new under the sun.

He believes that we should ask ourselves “What exactly is this technology best used for?” Given that “it’s one of the few opportunities in 100 years to enable wealth creation and wealth redistribution for a lot of people that have not had access to it.”

So the old-money generations are now facing “a new class of wealthy people. A whole lot of them. So why is this not a benefit?”

Will the WAM team stop? “No way in hell. It’s exciting to work on something that we know will change how people think when doing things online.” And we’re all about it.

The future is bright for those who have the right tools in their hands, ask the right questions, and do the work. Being unbiased is hard, but we find it refreshing to see a rational, balanced, and well-articulated take on Web 3.0 and blockchain.

Check out the full video here.

Disclaimer

All the information contained on our website is published in good faith and for general information purposes only. Any action the reader takes upon the information found on our website is strictly at their own risk.

Tuning enterprise architecture to cope with accelerating change

Change is not a constant: it’s far worse than that.

Accelerating rates of change mean that businesses can no longer plan years ahead. Rather they must continually adjust and readjust as new threats and opportunities emerge. And to do that, they must be agile. We commonly think of agility as a measure of how quickly, cheaply, safely, accurately and repeatably businesses can adapt to unexpected change. What’s less appreciated is that agility operates at two different levels, much like economics, and to succeed, businesses need to excel at both microagility and macroagility.

Microagility is what most people think of when they hear the word “agility,” as it’s what gets the most attention today. It relates to the business and technology change activities of individuals, teams and product or project units, and can be improved through the application of techniques such as iterative software development, DevOps, service automation, the use of cloud-based utility computing or robotic process automation for prototyping. In short, it comprises all the things that are usually brought up in discussions about agility.

The problem is that microagility can only be effective when the teams or individuals have sufficient autonomy and independence to enable them to do their work without continual reference to other teams for information, permission, approval, or for delivery of some of the work. This is where macroagility comes in, and it is at least as important, yet it’s not often discussed.

Macroagility applies to business and technology change activities on the larger scales of platforms, programs, business units, divisions and of the entire enterprise. It is enabled in part by an organization’s culture — the degree of autonomy it delegates to business units and its attitude toward innovation — but mostly by its operating model and technology architecture, both of which are designed at the macro level by business and enterprise architects.

In short, macroagility creates the environment for an organization’s microagility to thrive and to deliver change quickly, cheaply and reliably. Without it, businesses will still struggle to deliver change quickly, no matter how much they invest in DevOps and agile delivery methods. But with it, significant structural change such as acquisitions and divestments, and major innovation such as the launch of a new brand or product category, can become almost routine, instead of requiring Herculean efforts to achieve.

Barriers to agility at the macro level

What hinders agility at the macro level of the enterprise? In a word: complexity, or what a lot of organizations call technical or architectural debt — everything that makes systems or processes hard to understand and, therefore, hard to change. And the greatest controllable source of complexity is dependencies that cross system or organizational boundaries.

Take the case of one large retail bank that had over 600 software applications involved in the selling and servicing of mortgages. There were thousands of interconnections, each one of which contributed up to a dozen interdependencies, and each one of which had been designed by an architect at some point in the preceding decades. None of them had intended for this level of complexity eventually to emerge, and yet it had. Why?

The problem is that enterprise architects, unversed in the causes of complexity, can unwittingly add dependencies without even realizing it. Every arrow drawn on a whiteboard to indicate a new system interface, every database shared between applications, every additional handoff in a business process, every new governance forum adds dependencies that, as well as having an implementation cost, add friction whenever something needs to change. And not just linearly: because of the “starburst effect,” a seemingly simple change to one system ripples out through many interconnected systems. As changes proliferate, exponentially more coordination and regression testing are required. Agile this is not.
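The arithmetic behind the starburst effect is easy to sketch. If every changed system forces coordination with a handful of dependent systems, the number of systems touched grows geometrically with how deep the ripple runs. A rough model, with illustrative numbers rather than data from the bank above:

    def systems_touched(dependents_per_system: int, ripple_depth: int) -> int:
        """Systems involved when one change ripples through its dependents."""
        return sum(dependents_per_system ** level for level in range(ripple_depth + 1))

    # One change, five direct dependents per system, rippling two levels deep:
    # 1 + 5 + 25 = 31 systems to coordinate and regression test.
    print(systems_touched(dependents_per_system=5, ripple_depth=2))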

The job of the enterprise architect should be to understand the causes of complexity and to deliver simplicity, because the simpler the system, the faster we can change it. Yet instead of minimizing complexity, many enterprise architects measure their success using metrics such as level of reuse, or conformance with standards, or a reduction in duplication of capabilities. Real business value, however, is created when architects partition the enterprise in a way that maximizes business agility.

Reducing and minimizing dependencies

Fortunately, there are proven actions that enterprise architects can take to reduce complexity and thereby promote agility.

The first is to remove complexity in the same way that it was originally introduced — one architecture decision at a time. But in doing so, architects must observe a new constraint: The best solution option is the one that minimizes the complexity of the resultant architecture. This may involve relocating some existing functionality, perhaps by carving it off to a new microservice. With a relatively small initial investment, applied to a change “hotspot,” such an initiative can become self-funding in just a few months, as changes in this area become easier and cheaper to deliver than originally planned.

The second involves the excision of large amounts of complexity by replacing significant subsets of the architecture as part of a wider transformation. Typical examples would be core banking replacements or renewals of ERP platforms. Whilst significant transformation programs such as these are usually initiated by business leaders rather than architects, enterprise architects can help to make them successful. For example, architects can specify the use of proven façade-based patterns, such as the strangler fig, that enable incremental transformation. This approach delivers incremental business value and reduces the risk of business disruption, decoupling the systems that surround and depend upon the ones that are being replaced.
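To illustrate the façade idea (a sketch only, with invented class names rather than any real banking system), the strangler fig pattern puts a thin routing layer in front of the legacy system so capabilities can be cut over one at a time while callers remain undisturbed:

    class LegacyMortgageSystem:
        def quote(self, amount):
            return f"legacy quote for {amount}"

    class NewQuoteService:
        def quote(self, amount):
            return f"new-platform quote for {amount}"

    class MortgageFacade:
        """Callers depend only on the facade, never on the systems behind it."""

        def __init__(self, migrated_capabilities):
            self.legacy = LegacyMortgageSystem()
            self.replacement = NewQuoteService()
            self.migrated = migrated_capabilities

        def quote(self, amount):
            # Cut capabilities over one at a time; the rest stay on the legacy path.
            system = self.replacement if "quote" in self.migrated else self.legacy
            return system.quote(amount)

    facade = MortgageFacade(migrated_capabilities={"quote"})
    print(facade.quote(250000))

As more capabilities move across, the legacy system is gradually strangled until it can be retired, and the surrounding systems never need to change their integration point.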

The right kind of enterprise architects

The defining feature of a great enterprise architect is a passion for simplicity, of the kind described above. They are someone who understands, as Fred Brooks did as far back as 1975, that the secret to making change happen faster is to partition the problem — and thence the solution — in such a way that adding more people speeds things up more-or-less linearly.

Product-based organizations and microservices-based architectures epitomize this philosophy, and the extraordinary agility and scalability of the tech giants are testament to their superiority over the traditional operating models and monolithic architectures of yesterday’s corporate titans. But for everyone else, the challenge is how to undo the pernicious effect of past architectural decisions on their business agility and, in the process, enable them to compete vigorously in their chosen markets.

If you are looking at how best to drive agility in your organization, these three key actions are a great way to start:

  • Firstly, have your enterprise architects set guardrails for engineers that reduce and minimize architectural debt at the component level.
  • Secondly, while your engineers continue to improve their microagility, encourage your enterprise architects to focus on macroagility and see where simplifications can be made at the next level up that will increase the engineers’ autonomy.
  • Finally, start to measure enterprise architects on reducing the dependencies inherent in your business and technology architectures.

By taking these three steps, companies can set their enterprises on the path to a more agile future and ultimately be better equipped to cope with constant, ever-accelerating change.

About the author

Peter McElwaine-Johnn is a principal director in Accenture’s Technology Strategy and Advisory practice, based in London, U.K. He specializes in technology strategy and architecture, and advises mainly banking and government clients on digital transformation and architecture simplification.

AIS named Microsoft’s ‘partner of the year’

Company receives praise for innovation


Mr Tanapong, left, is congratulated by Dhanawat Suthumpun, managing director of Microsoft Thailand, for being awarded the 2022 Microsoft Thailand Partner of the Year award.

Advanced Info Service (AIS), Thailand’s biggest mobile operator by subscribers, has received the 2022 Microsoft Thailand Partner of the Year award.

The telecommunications giant was recognised for demonstrating excellence in innovation and implementation of customer solutions based on Microsoft technology.

Tanapong Ittisakulchai, chief enterprise business officer at AIS, said Microsoft and AIS have had a strategic partnership in Thailand since 2015.

“We have accelerated the adoption and built new distinctive solutions of Microsoft across the cloud, data, cybersecurity, Internet of Things and 5G-driven cloud innovations for businesses in Thailand, including specific product and service combinations that deliver optimised performance for cloud and edge computing with strong revenue growth of more than three digits year-on-year,” he said.

Mr Tanapong added that through Microsoft’s expertise and resources, AIS had become a more efficient provider.

“We are the first Thailand Microsoft partner recognised as an Azure advanced specialised partner, representing our IT consultancy’s expertise in Azure technologies and cloud migration. We use the Microsoft partner network to build our people’s competency and capabilities to enable Thai organisations of all sizes and scales to confidently move forward and contribute to the growing digital economy in Thailand.”

The Microsoft Partner of the Year Award recognises Microsoft partners that have developed and delivered outstanding Microsoft-based applications, services and devices during the past year. Awards are classified in various categories, with honorees chosen from more than 3,900 nominations submitted worldwide.

“These partners were outstanding among an exceptional pool of nominees, and I’m continuously impressed by their innovative use of Microsoft Cloud technologies and the impact on their customers,” said Nick Parker, corporate vice-president of Global Partner Solutions at Microsoft.

AIS has launched more than 12 solutions over the past year to scale Thai businesses across all segments, such as migrating the on-premises infrastructure of a leading retail business in Thailand to Azure, from assessment, design and implementation to support.

Following the migration, AIS reported that the customer saw Azure help cut investment costs by 20%.