Latest News for 4TC
We have loads to say!
With cloud costs and complexity higher than expected, many enterprises are making a U-turn and putting applications and data back in traditional systems.
Here’s a topic we don’t discuss as much as we should: public cloud repatriation. Many regard repatriating data and applications back to enterprise data centers from a public cloud provider as an admission that someone made a big mistake moving the workloads to the cloud in the first place.
I don’t automatically consider this a failure as much as an adjustment of hosting platforms based on current economic realities. Many cite the high cost of cloud computing as the reason for moving back to more traditional platforms.
High cloud bills are rarely the fault of the cloud providers. They are often self-inflicted by enterprises that don’t refactor applications and data to optimize their cost-efficiencies on the new cloud platforms. Yes, the applications work as well as they did on the original platform, but you’ll pay for the inefficiencies you chose not to deal with during the migration. The cloud bills are higher than expected because lifted-and-shifted applications can’t take advantage of native capabilities such as auto-scaling, security, and storage management that allow workloads to function efficiently.
It’s easy to point out the folly of not refactoring data and applications for cloud platforms during migration. The reality is that refactoring is time-consuming and expensive, and the pandemic put many enterprises under tight deadlines to migrate to the cloud. For enterprises that did not optimize systems for migration, it doesn’t make much economic sense to refactor those workloads now. Repatriation is often a more cost-effective option for these enterprises, even considering the hassle and expense of operating your own systems in your own data center.
In a happy coincidence, the prices of hard drive storage, networking hardware, computer hardware, power supplies, and other tech gear dropped in the past 10 years while cloud computing costs remained about the same or a bit higher.
Business is business. You can’t ignore the fact that it makes economic sense to move some workloads back to a traditional data center.
It makes the most sense to repatriate workloads and data storage that do largely the same thing day after day, such as simply storing data for long periods of time without any special data processing (e.g., no advanced artificial intelligence or business intelligence). These workloads can often move back to owned hardware and show a net ROI gain. Even with the added costs to take over and internalize operations, the enterprise saves money (or a lot of money) compared to equivalent public cloud hosting.
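As a rough illustration of that arithmetic, the sketch below compares cumulative cloud storage spend against owned hardware over a fixed horizon. Every figure in it is an invented assumption for the sake of the example, not real vendor pricing.

```python
# Hypothetical back-of-the-envelope comparison of public cloud storage versus
# owned hardware for a long-lived archive workload. Every figure below is an
# invented assumption, not real vendor pricing.

def cloud_cost(tb_stored: float, price_per_tb_month: float, months: int) -> float:
    """Cumulative cloud storage spend over the period."""
    return tb_stored * price_per_tb_month * months

def on_prem_cost(hardware_capex: float, monthly_opex: float, months: int) -> float:
    """Up-front hardware purchase plus ongoing power, space and staff costs."""
    return hardware_capex + monthly_opex * months

if __name__ == "__main__":
    months = 36  # a three-year planning horizon
    cloud = cloud_cost(tb_stored=500, price_per_tb_month=20.0, months=months)
    owned = on_prem_cost(hardware_capex=120_000, monthly_opex=1_500, months=months)
    print(f"Cloud over {months} months:   ${cloud:,.0f}")    # $360,000
    print(f"On-prem over {months} months: ${owned:,.0f}")    # $174,000
    print("Repatriation pays off" if owned < cloud else "Cloud remains cheaper")
```

With different (equally plausible) inputs the answer flips, which is exactly why this is a per-workload business decision rather than a blanket rule.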
However, don’t forget that many workloads have dependencies on specialized cloud-based services. Those workloads typically cannot be repatriated because affordable analogs are unlikely to run on traditional platforms. When advanced IT services are involved (AI, deep analytics, massive scaling, quantum computing, etc.), public clouds typically are more economical.
Many enterprises made a deliberate business decision at the time to absorb the additional costs of running lifted-and-shifted applications on public clouds. Now, based on today’s business environment and economics, many enterprises will make a simple decision to bring some workloads back into their data center.
The overall goal is to find the most optimized architecture to support your business. Sometimes it’s on a public cloud; many times, it’s not. Or not yet. I learned a long time ago not to fall blindly in love with any technology, including cloud computing.
2023 may indeed be the year we begin repatriating applications and data stores that are more cost-effective to run inside a traditional enterprise data center. This is not a criticism of cloud computing. Like any technology, cloud computing is better for some uses than for others. That “fact” will evolve and change over time, and businesses will adjust again. No shame in that. Source: 2023 could be the year of public cloud repatriation | InfoWorld
Depending on who you ask, quantum computers could either break the Internet, rendering pretty much every data security protocol obsolete, or allow us to compute our way out of the climate crisis.
These hyper-powerful devices, an emerging technology that exploits the properties of quantum mechanics, are much buzzed about.
Only last month, IBM unveiled its latest quantum computer, Osprey, a new 433-qubit processor that is three times more powerful than its predecessor, built only in 2021.
But what is all the hype about?
Quantum is a field of science that studies the physical properties of nature at the scale of atoms and subatomic particles.
Proponents of quantum technology say these machines could usher in rapid advances in fields like drug discovery and materials science – a prospect that dangles the tantalising possibility of creating, for example, lighter, more efficient, electric vehicle batteries or materials that could facilitate effective CO2 capture.
With the climate crisis looming, any technology that offers hope of solving complex issues like these is bound to draw keen interest.
Little wonder then, that some of the largest tech companies in the world – Google, Microsoft, Amazon, and, of course, IBM to name a few – are investing heavily in it and angling to stake their place in a quantum future.
Given these utopic-sounding machines are drawing such frenzied interest, it would perhaps be useful to understand how they work and what differentiates them from classical computing.
Take every device that we have today – from the smartphones in our pockets to our most powerful supercomputers. These operate and have always operated on the same principle of binary code.
Essentially, the chips in our computers use tiny transistors that function as on/off switches to give two possible values, 0 or 1, otherwise known as bits, short for binary digits.
These bits can be configured into larger, more complex units, essentially long strings of 0s and 1s encoded with data commands that tell the computer what to do: display a video; show a Facebook post; play an mp3; let you type an email, and so on.
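To see what that means in practice, here is a tiny Python snippet showing that even ordinary text is just such a string of bits once a computer stores it:

```python
# Every piece of classical data ultimately boils down to a string of 0s and 1s.
text = "Hi"
bits = " ".join(format(byte, "08b") for byte in text.encode("utf-8"))
print(bits)  # 01001000 01101001 -- the two bytes that encode "Hi"
```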
But a quantum computer?
These machines function in an entirely different way. In the place of bits in a classical computer, the basic unit of information in quantum computing is what’s known as a quantum bit, or qubit. These are typically subatomic particles like photons or electrons.
The key to a quantum machine’s advanced computational power lies in its ability to manipulate these qubits.
“A qubit is a two-level quantum system that allows you to store quantum information,” Ivano Tarvenelli, the global leader for advanced algorithms for quantum simulations at the IBM Research Lab in Zurich, explained to Euronews Next.
“Instead of having only the two levels zero and one that you would have in a classical calculation here, we can build a superposition of these two states,” he added.
Superposition in qubits means that unlike a binary system with its two possible values, 0 or 1, a qubit in superposition can be 0 or 1 or 0 and 1 at the same time.
And if you can’t wrap your head around that, the analogy often given is that of a penny.
When it is stationary a penny has two faces, heads or tails. But if you flip it? Or spin it? In a way, it is both heads and tails at the same time until it lands and you can measure it.
And for computing, this ability to be in multiple states at the same time means that you have an exponentially larger number of states in which to encode data, making quantum computers exponentially more powerful than traditional, binary-code computers.
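For readers who want to see the maths behind the penny analogy, here is a minimal NumPy sketch of a single idealised qubit. This is only the underlying linear algebra, not how real quantum hardware is programmed.

```python
import numpy as np

# A classical bit is either 0 or 1. A qubit's state is a 2-component complex
# vector; applying a Hadamard gate to |0> produces an equal superposition.
ket0 = np.array([1, 0], dtype=complex)                 # the |0> state
hadamard = np.array([[1, 1], [1, -1]]) / np.sqrt(2)    # Hadamard gate

state = hadamard @ ket0                                # (|0> + |1>) / sqrt(2)
probabilities = np.abs(state) ** 2                     # Born rule: |amplitude|^2

print(state)          # [0.70710678+0.j 0.70710678+0.j]
print(probabilities)  # [0.5 0.5] -> a measurement yields 0 or 1 with equal chance
```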
Another property crucial to how quantum computing works is entanglement. It’s a somewhat mysterious feature of quantum mechanics that baffled even Einstein, who in his time declared it “spooky action at a distance”.
When two qubits are generated in an entangled state there is a direct measurable correlation between what happens to one qubit in an entangled pair and what happens to the other, no matter how far apart they are. This phenomenon has no equivalent in the classical world.
“This property of entanglement is very important because it brings a much, much stronger connectivity between the different units and qubits. So the elaboration power of this system is stronger and better than the classical computer,” Alessandro Curioni, the director of the IBM Research Lab in Zurich, explained to Euronews Next.
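Continuing the single-qubit sketch above, the fragment below builds the simplest entangled state (a Bell pair) and shows the perfect correlation Curioni describes. Again, this is just the linear algebra of an idealised system, not code for a real quantum device.

```python
import numpy as np

# Build the Bell state (|00> + |11>)/sqrt(2): a Hadamard on the first qubit
# followed by a CNOT gate with the first qubit as control.
ket00 = np.zeros(4, dtype=complex)
ket00[0] = 1.0                                          # the |00> state
H = np.array([[1, 1], [1, -1]]) / np.sqrt(2)
I = np.eye(2)
CNOT = np.array([[1, 0, 0, 0],
                 [0, 1, 0, 0],
                 [0, 0, 0, 1],
                 [0, 0, 1, 0]], dtype=complex)

bell = CNOT @ np.kron(H, I) @ ket00
probabilities = np.abs(bell) ** 2

# Only |00> and |11> are possible: measuring one qubit fixes the other's outcome.
print({label: round(float(p), 3)
       for label, p in zip(["00", "01", "10", "11"], probabilities)})
# {'00': 0.5, '01': 0.0, '10': 0.0, '11': 0.5}
```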
In fact, this year, the Nobel Prize for physics was awarded to three scientists, Alain Aspect, John Clauser, and Anton Zeilinger, for their experiments on entanglement and advancing the field of quantum information.
So, in an admittedly simplified nutshell, these are the building blocks of how quantum computers work.
But again, why do we necessarily need such hyper-powerful machines when we already have supercomputers?
“[The] quantum computer is going to make, much easier, the simulation of the physical world,” Curioni said.
“A quantum computer is going to be able to better simulate the quantum world, so simulation of atoms and molecules”.
As Curioni explains, this will allow quantum computers to aid in the design and discovery of new materials with tailored properties.
“If I am able to design a better material for energy storage, I can solve the problem of mobility. If I am able to design a better material as a fertiliser, I am able to solve the problem of hunger and food production. If I am able to design a new material that allows [us] to do CO2 capture, I am able to solve the problem of climate change,” he said.
But there could also be some undesirable side effects that have to be accounted for as we enter the quantum age.
A primary concern is that quantum computers of the future could be possessed of such powerful calculation ability that they could break the encryption protocols fundamental to the security of the Internet that we have today.
“When people communicate over the Internet, anyone can listen to the conversation. So they have to first be encrypted. And the way encryption works between two people who haven’t met is they have to rely on some algorithms, known as RSA or Elliptic-Curve Diffie–Hellman, to exchange a secret key,” Vadim Lyubashevsky, a cryptographer at the IBM Research Lab in Zurich, explained.
“Exchanging the secret key is the hard part, and those require some mathematical assumptions which become broken with quantum computers”.
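To make the “exchange a secret key” step concrete, here is a toy Diffie–Hellman exchange in Python. The parameters are deliberately tiny and purely illustrative; real deployments use far larger keys, and it is exactly this discrete-logarithm assumption that a large enough quantum computer running Shor’s algorithm would break.

```python
import secrets

# Toy Diffie-Hellman key exchange. Security rests on the difficulty of
# recovering a private exponent from g**x mod p (the discrete logarithm
# problem) -- the very assumption a large quantum computer would break.
p = 0xFFFFFFFB        # a small prime (2**32 - 5), far too small for real use
g = 5                 # public generator

a = secrets.randbelow(p - 3) + 2      # Alice's private exponent
b = secrets.randbelow(p - 3) + 2      # Bob's private exponent

A = pow(g, a, p)                      # Alice sends this value publicly
B = pow(g, b, p)                      # Bob sends this value publicly

alice_shared = pow(B, a, p)           # (g**b)**a mod p
bob_shared = pow(A, b, p)             # (g**a)**b mod p

assert alice_shared == bob_shared     # both sides now hold the same secret key
print(hex(alice_shared))
```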
In order to protect against this, Lyubashevsky says that organisations and state actors should already be updating their cryptography to quantum-safe algorithms, i.e. ones that cannot be broken by quantum computers.
Many of these algorithms have already been built and others are in development.
“Even if we don’t have a quantum computer, we can write algorithms and we know what it will do once it exists, how it will run these algorithms,” he said.
“We have concrete expectations for what a particular quantum computer will do and how it will break certain encryption schemes or certain other cryptographic schemes. So, we can definitely prepare for things like that,” Lyubashevsky added.
“And that makes sense. It makes sense to prepare for things like that because we know exactly what they’re going to do”.
But then there is the issue of data that already exists which hasn’t been encrypted with quantum-safe algorithms.
“There’s a very big danger that government organisations right now are already storing a lot of Internet traffic in the hopes that once they build a quantum computer, they’ll be able to decipher it,” he said.
“So, even though things are still secure now, maybe something’s being transmitted now that is still interesting in ten, 15 years. And that’s when the government, whoever builds a quantum computer, will be able to decrypt it and perhaps use that information that he shouldn’t be using”.
Despite this, weighed against the potential benefits of quantum computing, Lyubashevsky says these risks shouldn’t stop the development of these machines.
“Breaking cryptography is not the point of quantum computers, that’s just a side effect,” he said.
“It’ll have hopefully a lot more useful utilities like increasing the speed with which you can discover chemical reactions and use that for medicine and things like that. So this is the point of a quantum computer,” he added.
“And sure, it has the negative side effect that it’ll break cryptography. But that’s not a reason not to build a quantum computer, because we can patch that and we have patched that. So that’s sort of an easy problem to solve there”.
Source: What is quantum computing and how will quantum computers change the world? | Euronews
As modern technology continues to advance in ways that satisfy the human desire for instant gratification, consumers are placing more emphasis on speed as a key feature when choosing their product vendors.
Whether you choose to blame this on the world’s introduction to old-school instant messaging or Amazon’s two-day shipping, at the end of the day, the demand for speed affects businesses and organizations like it never has before. But, this demand also means that businesses and organizations must step up their operations to keep up with the competition.
Software solutions that can enable companies to pick up the pace and complete their processes at a faster rate are highly favored, with this feature valued second only to reliability. Fortunately, organizations looking for a way to advance in their data processing can adopt edge computing technology in order to carry out their operations quickly but still trust that their data is secure.
Azure Stack Edge is a hardware-as-a-service offering that enables organizations to access, utilize and analyze their applications and data locally.
Users can run their containerized applications at the edge location where data is generated and gathered. From there, the data can be analyzed, transformed and filtered, and users can control the data that they choose to send to the cloud. Their edge device also serves as a cloud storage gateway for easy data transfers between the cloud and the edge location.
Working with Azure’s edge solution makes it easy to use Azure’s other products that integrate with it, so organizations can generate and train their machine learning (ML) models in Azure and benefit from quick data analysis and access to insights.
Azure offers several versions of its edge devices within the Azure Stack Edge Pro series, granting users and organizations more options with a greater selection of features and capabilities to choose from, so they can get the tool that suits their needs.
HPE Edgeline supports edge computing and processing through its various converged edge systems. Its systems provide IT functionality optimized for edge operating environments, enabling users to benefit from edge storage, computing and management.
Its solutions are purpose-built for the edge, with autonomous operations, real-time local decision-making, and easy scaling across sites and locations. As for security, HPE Edgeline Integrated System Manager provides IT-grade security to support the deployment and operation of Edgeline systems.
The HPE Edgeline Converged Edge systems serve as a distributed converged edge compute model, so users can manage their operations and data in real time, even without an internet connection. In addition, its systems connect open standards-based operations technology (OT) data acquisition and control technologies directly to the user’s organizational IT system, reducing latency and saving space.
HPE has various enterprise-class converged edge system tools for customers to choose from, with features optimized for different use cases. Additionally, HPE’s Edgeline OT Link Platform software is also offered and supports users’ edge activities like data flow and integration management.
ClearBlade’s technology works to streamline edge data by connecting device sensors or event data and transporting it into cloud data lakes, enterprise systems or artificial intelligence (AI) tools through built-in integrations. Its software can connect via REST, MQTT and sockets in addition to its prebuilt connections for third-party systems.
The solution lets users choose to keep their functions local within their edge locations or transfer them within cloud storage and vice versa. Users can also complete various data processes at the edge, including data analysis, modification, routing, storage and management.
ClearBlade keeps users’ data secure with encryption, authentication and authorization of application programming interface (API) access. It enables connections across all user clouds, gateways and devices with various protocols.
Another aspect of its edge computing technology is offline continuity, which ClearBlade advertises as a perk to its software. Even when an internet connection is lost, all edge devices are able to continue their real-time behaviors.
The Eclipse Foundation’s Eclipse ioFog is an edge computing platform for processing enterprise-scale data and applications at the edge. By processing users’ data at the point of creation with an edge-centric compute architecture, users can gain more functionality and greater security for all of their data and application processes.
ioFog’s universal edge computing platform enables users to create and remotely deploy their microservices to their edge computing devices by providing a common computing platform that lets software run on any device.
Users can deploy and manage multiple edge devices at once as an edge compute network, which ioFog manages automatically. ioFog can manage and transfer any data type and supports native geofencing of users’ data, nodes and routing.
To secure users’ edge activities, each node within the edge compute network is part of a distributed trust network and is constantly validating security protocols with all of the other nodes, monitoring for deviations. Data transfer and communications between nodes occur via session-based MicroVPNs, which ioFog creates as a method of enforcing security for its users.
Google Distributed Cloud Edge users can now maintain their data use and storage according to their workload needs and requirements by utilizing any of Google’s 140+ network edge locations worldwide or their own localized, customer-owned edge locations. Google also supports Google Distributed Cloud services across the customer’s operator’s edge network and customer-owned data centers.
Users can use the open-source platform Anthos on their Google-managed hardware at their edge location to run their applications securely on a remote services platform. This way, they can locally process their data and transfer or modernize their applications with Google Cloud services.
By leveraging the capabilities of Google Distributed Cloud Edge, users can run local data processing, modernize their on-premises environments, run low-latency edge compute workloads and deploy private 5G/LTE solutions.
The software also connects with third-party services, granting greater accessibility within customers’ own environments.
Alef’s Edge API Platform enables organizations to manage their applications at the edge through mobile connections. In addition, users can develop their own private mobile LTE networks with API connections and firewall protection.
The APIs allow users to manage their mobile connectivity for Industry 4.0 applications. Additionally, deploying mobile networks as a service at the edge can allow users to create an easy-to-use private LTE network without on-site mobile network installation.
Alef’s system increases speed for users, as connecting to their edge enables them to access services within 50 milliseconds of any U.S. enterprise. Furthermore, by simplifying network complexities, Alef has reduced the time to launch to 60 minutes. And by orchestrating their operations and workloads at the edge with their core IP, organizations benefit from lowered latency for their applications.
Finally, leveraging Alef’s edge solutions means being able to connect to any spectrum, any EPC (Evolved Packet Core) and any cloud. Users can connect to 5G/4G/3G spectrums or their own Wi-Fi and manage data traffic across any cloud provider. And Alef is agnostic, so organizations can partner with their EPC or choose to bring their own.
Cisco offers several edge computing solutions for users to deploy their services on their own developed edge computing infrastructure.
Users can design an edge computing infrastructure for their workloads that enable them to separate their network functions and optimize their resources with software-centric solutions that they can procure separately. Additionally, their fixed edge and mobile networks can share 5G core-based infrastructure coverage for greater efficiency in their operational processes.
Application developers can benefit from using Cisco’s open edge computing model. It can enable them to mitigate congestion in the core and meet local demands, as applications can access information about local conditions in real time. Additionally, close proximity to subscribers and real-time network data access can enable a better application user experience.
Cisco’s edge computing model can deliver high-quality data and application performance and security. By distributing the users’ computing capacity to the edge, users can benefit from lower latency to end devices, greater network efficiency with edge offloading and reduced costs for data transportation.
Infiot, recently acquired by Netskope, is a secure access service edge (SASE) platform that provides edge intelligence with AI-driven components. Netskope Borderless WAN will now integrate Infiot’s ZETO technology to further its edge functionalities for Netskope customers.
The combined technology of Infiot and Netskope will now be able to provide further built-in routing, policy-based traffic control, wired and wireless networking, and integrated network security functions for edge deployment.
Furthermore, Netskope customers will be able to utilize cloud-first networking through the use of Netskope SASE Gateways for secure connections between any enterprise location.
The solution’s developments will help improve Netskope customers’ performance speed, cloud visibility and application activity with the addition of this technology.
Edge computing is the process of using software to process data closer to where that data is generated and used. Processing data this way can reduce the time it takes to analyze it compared with first transferring it to a data center or the cloud, because it cuts out the latency involved in moving the data back and forth.
Organizations that store and leverage their business data are generating ever-larger volumes of it. The growing volume and complexity of this data have created the need for more space and led to latency issues. Edge computing software, however, can handle larger amounts of data with reduced latency, which means more easily accessible data insights for organizations.
However, an edge computing solution can do more than just reduce the data processing time compared to cloud computing. Organizations can gain a vast range of additional benefits by processing data at a location at the edge of a network and utilizing systems at those physical locations.
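As a simple illustration of the pattern these products support, the sketch below aggregates raw readings locally and only ships a compact summary upstream. The endpoint URL and the sensor readings are made up for the example, and the actual APIs of the vendors discussed here will differ.

```python
import json
import statistics
import urllib.request

# Hypothetical edge pattern: summarise raw sensor readings locally, then send
# only the small summary to a (fictional) cloud endpoint. Latency-sensitive
# work stays at the edge and far less data travels upstream.

CLOUD_ENDPOINT = "https://cloud.example.com/ingest"   # placeholder URL

def summarise(readings: list[float]) -> dict:
    """Reduce a batch of raw readings to a few summary statistics."""
    return {
        "count": len(readings),
        "mean": statistics.mean(readings),
        "max": max(readings),
        "alerts": sum(1 for r in readings if r > 80.0),  # example threshold
    }

def send_to_cloud(summary: dict) -> None:
    """POST the summary as JSON; only this small payload leaves the edge site."""
    request = urllib.request.Request(
        CLOUD_ENDPOINT,
        data=json.dumps(summary).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    urllib.request.urlopen(request)  # not called below: the endpoint is fictional

if __name__ == "__main__":
    raw_readings = [71.2, 69.8, 84.5, 70.1, 90.3]  # e.g. temperatures from local sensors
    print(summarise(raw_readings))
```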
Organizations that utilize cloud-based data analysis tools across all of their business enterprises can increase their risk of security breaches and potential data loss, as an attack on a connected solution could affect all of their organizational operations. With edge computing, a security breach would have less impact on the entire organization, and only the transferred data could be affected.
Conducting on-site data analysis with edge computing devices also means that the analyzed data is safeguarded by the localized firewalls that protect the enterprise.
Edge computing can also allow organizations to control their data flow and storage that takes place within their edge locations, enabling them to manage their data in a way that lowers data redundancy, reduces bandwidth and lowers operation costs.
Edge computing is also arguably more reliable than cloud computing. Using edge devices to save and process data locally can mean greater access to data than entrusting it to a remote storage location, as edge device users won’t need to worry about internet connectivity issues when trying to process and access their data. Regardless of connectivity issues, organizations can access their network data stored in their edge solutions.
There are several features and capabilities that are common in edge computing products. Edge computing software solutions generally provide features that enable real-time access to local information to support immediate action. In addition, many edge computing software solutions will provide automated features that can occur regardless of unreliable or inaccessible internet connectivity.
To keep users’ data and applications secure, edge computing software should come with security features. Common security features of these solutions may include on-premises security, isolated operating environments, edge device monitoring and authorizations, authentication, and encryption layers. Alternatively, edge software solutions may support connections with third-party security services.
Other beneficial features of edge computing systems should support the management of the organization’s data storage, analysis and transfer processes. This can include features that provide users with visibility into their data center and operations and even cloud operations.
In addition, granting users greater control over their data flows can help them facilitate easy scaling across locations, supported by an easily manageable and configurable architecture.
In the last article we found out that ISO 9001 is the international standard that specifies requirements for a quality management system (QMS). And that most businesses use this standard to demonstrate the ability to consistently supply products and services that meet customer and regulatory requirements.
In this article we are going to look at the seven key principles that ISO 9001 promotes as important, to help businesses keep to these standards:
ISO 9001 isn’t just for senior management; your whole organisation contributes towards its processes. If you want to benefit fully from your quality management ISO, you will need to openly discuss issues and share knowledge and experience with your team. It is paramount that everyone in your company understands their contribution to its success and feels valued for it. This will demonstrate your business’s commitment to improving quality and will help you achieve certification.
You may also want to consider awareness training to help raise awareness of ISO 9001 and the benefits it brings. There are plenty of online courses that could be informative and useful for your business personnel.
A really great way of showing your commitment to quality is developing a strong customer focus. To strengthen your business and its performance even further, it is very important to gather customer feedback, good or bad. This can help you to spot non-conformities and improve your processes.
Your company should take into account not only the interests of the consumers, but also those of other stakeholders, including owners, employees, suppliers, investors, and the general public.
Strong leadership entails having a distinct vision for the future of your business. Effectively communicating this vision will guarantee that every team member is working toward the same goals, providing your organisation a sense of unity. As a result, employee motivation and productivity may increase.
The ISO 9001 Standard’s Plan Do Check Act (PDCA) principle will assist you in fostering a process-driven culture throughout your organisation. This is a tried-and-true method to guarantee that you efficiently plan, resource, and manage your processes and interactions.
You may align operations for improved efficiency and make it easier to reach your goals by managing the many sections of your organisation as a whole. You can find areas for improvement by measuring and analysing these interconnected processes.
The ISO 9001 quality management system depends on continuous improvement, which is why it should be your company’s main goal. You can uncover ways to enhance and strengthen your business by putting processes in place for identifying risks and opportunities, spotting and resolving non-conformities, and measuring and monitoring your efforts.
Making informed judgments requires access to accurate and trustworthy data. For instance, you need the appropriate evidence to identify the underlying cause of a non-conformity. Ensure that individuals who require information can access it, and maintain open lines of communication.
It’s possible for your suppliers to give you a competitive edge, but this demands a partnership based on trust. Long-term, mutually beneficial methods must be balanced with short-term financial rewards in order to forge such enduring partnerships with suppliers and other interested parties.
During the ISO 9001 certification process, putting these seven quality principles into practice can help you fulfil important requirements of the Standard. As a result, you will be able to improve employee engagement and productivity, customer satisfaction and loyalty, and resource utilisation.
4TC can support you with all the services you need to run your business effectively, from email and domain hosting to fully managing your whole IT infrastructure.
Setting up a great IT infrastructure is just the first step. Keeping it up to date, safe and performing at its peak requires consistent attention.
We can act as your IT department or supplement an existing IT department. We pride ourselves on developing long-term relationships that add value to your business with high-quality managed support, expert strategic advice, and professional project management.
ISO 9001, the worldwide standard for creating Quality Management Systems (QMS), is published by ISO (the International Organization for Standardization). The current version is referred to as ISO 9001:2015, as it was most recently updated in 2015. To be recognised as an international standard, ISO 9001 and its updates had to be approved by a majority of member nations, which means it is accepted by most nations on the planet.
What are quality management systems? In short, ISO 9001 sets out the QMS requirements a business must implement if it wants to develop all of the policies, processes, and procedures required to offer products and services that fulfil customer and regulatory needs and enhance customer satisfaction. Quality management systems are the cornerstone of quality assurance activities.
As was already said, ISO 9001:2015 is a widely accepted standard for developing, implementing, and upholding a company’s quality management system. It can be used by any business and is intended for organisations of any size and in any sector. Because it is an accepted international standard, many organisations require this certification from their suppliers, and it serves as the foundation for any business creating a system to ensure customer satisfaction and improvement.
If you hold an ISO 9001 certification, your customers will feel more secure knowing that you have a Quality Management System in place that is based on the seven ISO 9001 quality management principles. In fact, ISO 9001 is so important and prominent that it serves as the foundation for other industry standards developed by groups of companies, such as AS9100 for the aerospace industry, ISO 13485 for the medical devices sector, and IATF 16949 for the automobile industry.
The ISO 9001:2015 version of the standard is the most recent one. The previous revision, ISO 9001:2008, was replaced by the ISO 9001:2015 standard, which is also referred to as ISO 9001 revision 2015. Many of the procedures from the earlier iteration of the standard are included in this updated revision, which places more emphasis on risk-based thinking and an awareness of the organization’s context. A significant structural modification from the ISO 9001:2008 standard was made to enable this transition; the key clauses of the standard are different between the 2015 and 2008 iterations.
It is a very common question to ask what the purpose of ISO is. ISO is an international organisation that creates a commonly recognised set of requirements and guidelines to assist organisations around the globe to act more consistently. More than 22,450 standards are created, published, and maintained by the ISO organisation through technical committees made up of people from all around the world. These standards offer guidance on how to develop management systems, conduct certain testing, and design and construct products.
ISO itself does not assess companies against these standards. It only participates in maintaining the standards; the evaluation of businesses against them is left to external certification organisations.
In the next article we will look in depth into the most important requirements of ISO and how to best implement them into your business.
Small quality-of-life feature could change the way you use Microsoft 365
Microsoft has announced a new, more convenient way to pull images from Android devices, such as smartphones, into documents and presentations made with the web versions of Word and PowerPoint found in Microsoft 365.
In a post on the Office Insiders blog, the company revealed that it will soon be possible for users of Microsoft’s online collaboration tool to link their Android photo libraries to a Microsoft account using a one-time setup process.
For now, the feature is only available to personal users, in addition to enterprise and education organizations, who have opted into the Office Insiders program, which gives users early access to exciting but experimental new features within the company’s office software suite.
While at a glance this might seem like an exclusive club, and some business and education users may be out of luck due to their organization’s Microsoft Office configuration, it’s not too hard to enrol so long as a user has full control over their system.
For personal users, the Office Insider program is a simple opt-in, to be found on the product information page within Microsoft 365.
Once that’s done, linking your Android photo library to your Microsoft account is a simple process, so long as your Android device can scan a QR code with its camera and install the Link to Windows app from the Google Play Store.
Luckily, these are features that will come as standard on most, if not all, recent business smartphones.
The new feature promises to be robust and intuitive, supporting the same image sizes, dimensions, and file types currently supported by Word and PowerPoint on the web. It will also allow users to replace existing images in documents with those on mobile devices.
The feature will eventually be available to all users of Office on the web who have a Microsoft 365 subscription, use an Android phone, and, if they are using the Mozilla Firefox web browser, are on version 104.0 or later.
Source – Your Android smartphone could be your biggest Word or PowerPoint helper | TechRadar
Email: support@4tc.co.uk
Tel: 020 7250 3840
5th Floor, 167‑169 Great Portland Street
London
W1W 5PF
Dew Gates The Street
High Roding
Essex
CM6 1NT