Using Azure to Construct a Novel Architecture for Real-Time Training Visualization

Background/Objectives: In recent years, visualization systems have become essential not only for entertainment but also for training in various fields. They no longer depend on fixed devices alone; they use visual systems such as the head-mounted Mixed Reality (MR) display that Microsoft developed as the HoloLens. It is equipped with engines that allow the user to interact with the headset via voice commands and to communicate remotely with specialists in the surgical field in real-life situations. The main objective of this study is to use 3D anatomical information models, the Digital Imaging and Communications in Medicine (DICOM) file, and all of the patient's data, registered on the Azure cloud computing system, to obtain the necessary training and support for any emergency encountered before and during surgical planning. Method: This study presents an application divided into two stages of anatomical simulation for training local and international trainees through MR. The first stage converts DICOM files to a 3D model using machine learning and presents the anatomical structure in the HoloLens emulator. The second stage involves Microsoft Azure: data are stored on the cloud network in Data Lake and Azure Cosmos DB, and the Azure Spatial Anchors service allows the trainee to locate the model at any time through the ID displayed by the IoT sensor. Findings: This study examines Mixed Reality technology and the HoloLens head-mounted display to show their expected potential to improve the surgeon's actions in surgery. The examination concludes with the display of 3D anatomical models, because they give the surgeon access to the best solutions in real-life situations during the procedure by assessing three-dimensional holograms related to patient imaging or surgical techniques.


Introduction
During recent years, there has been significant development in visual systems, data storage, and machine learning. Mixed Reality technology has been used in surgeries through the HoloLens head-mounted display (HMD) to store and exchange data with other professors via Skype (1). Microsoft subsequently combined the HoloLens, the Windows Azure system, and IoT with mixed reality technology in cloud computing. The resulting system supports saving information and using it at any time, exchanging it with specialized doctors around the world, and training before and during an operation while receiving simultaneous support from an assisting surgeon through IoT systems. Such a system has not been used before with mixed reality technology, and this gap is one of the reasons for carrying out this study.
Mixed reality systems integrate virtual objects with the real environment, enhancing the real with the virtual in a manner that allows a user to interact with the natural environment and see objects at the same scale (1:1) (2,3). MR is a multi-function system that allows us to control and mix signal processing, computer vision (CV), computer graphics (CG), user interfaces (UI), visualization, screen design, and sensors (4). Mixed Reality (MR) can be defined as a combination of the real-world environment and virtual objects in the digital world that provides interaction and co-existence between those elements. This approach uses a specific model to support three-dimensional elements within reality as they participate in the formation of a specific task (5). However, the boundaries between these new realities, technologies, and experiences have not yet been established by researchers and practitioners; improved knowledge of these concepts from technological (embodiment), psychological (personality), and behavioral (interactivity) perspectives is needed to design new taxonomies of technologies (6). MR, a new branch of simulation and visualization techniques, is being used in projects worldwide focused on training, online learning, and course design (7), such as the Digital Learning Laboratory (DLL) (8). It allows users to train and communicate through the internet in online environments called Remote Access Laboratories (RAL) (9). Mixed reality technology is essential in remote training through workstations and allows access from multiple places worldwide, which raises one of the main problems: it requires equipment (a central web server, an experiment server, web hosting, and a firewall) at high cost and with high internet speeds. Hence, it becomes feasible when it relies on the availability of data on the cloud network and Windows Azure services.
Azure mixed reality services facilitate the tracking of 3D models through Azure Spatial Anchors (ASA) and Azure Digital Twins (ADT) (10,11). They create spatial intelligence: relationships between people, spaces, and devices, so that data can be queried from a physical space rather than from many individual sensors (Table 1) (6).

The Mixed Reality Concept through Different Visualization Definitions
VR, Virtual Reality, can be defined as an interactive, computer-generated, simulated environment used to experience a world that does not exist in reality. It incorporates mainly auditory and visual feedback and uses virtual reality headsets to generate realistic images and surroundings that simulate a user's physical presence in a virtual or imaginary environment.
AV, Augmented Virtuality, can be defined as augmenting real-life notions into the virtual world; it refers to the merging of real objects into virtual worlds and lies within the mixed reality spectrum.
AR, Augmented Reality, can be defined as the experience of interacting with a real-world environment whose elements are "augmented" by computer-generated cognitive information across multiple sensory modalities, visual and auditory, through the use of digital technology. MR, Mixed Reality, can be defined more specifically as a mix of virtual and real objects: a combination of Augmented and Virtual Reality that provides a richer experience and adds new value in training. It is interactive to the point where many applications can be built on it.
Microsoft HoloLens is an excellent example of Mixed Reality. It helps users navigate the real world using real-world concepts such as rotation, navigation, and depth applied to virtual objects for a better experience.
XR, X Reality (Figure 1), is one of the forms of the mixed reality environment; its applications contain several tools that allow the creation of Virtual Reality (VR), Mixed Reality (MR), and Augmented Reality (AR) systems. It includes sensor modules and force-feedback (FFB) reaction units. The X system helps users create a new form of reality by interacting with digital elements in the real world.

General Structure Interaction Forms within the Virtual Environment
In Figure 2, we can see the integration of virtual elements generated by the computer as three-dimensional models, their appearance inside the virtual environment, and the elements needed as a basic requirement for combining them with the surrounding environment. The combined space is often called the virtual environment (VE), which provides the actual workspace through the use of a head-mounted display or mobile phone device; when there is no reaction channel for the user, this is known as Augmented Reality. The data bank (DB-Assets) holds all elements necessary to design the information model. The user can reach a digital interface (Graphical User Interface, GUI) through which he can control the model's transformations within the real environment; together with communication tools and a reaction channel (force-feedback), this is called "mixed reality" (2,12).

Mixed Reality Toolkit [METK]
The Mixed Reality Toolkit is a software platform aimed at the application developer to streamline programming functions. The platform includes a Windows-environment SDK consisting of a set of C++ classes grouped into modules, and is known as the MX Toolkit (13,14). The MR Toolkit contains many elements/algorithms within the software platform; the ten major ones are listed in (Table 2).

Microsoft Azure -Mixed Reality
Azure (10) is a set of cloud computing services created by Microsoft for building, testing, deploying, and managing applications and services through its data centers. It provides infrastructure, platform, and software services and supports many different programming languages and frameworks, including the Microsoft tools family and third-party software. Azure Spatial Anchors (ASA) and Azure Digital Twins (ADT) are part of the Microsoft Azure Mixed Reality services. MR is a combination of the physical world and the digital world; in mixed reality, digital information is built from holographic objects, focused on sound and light, shown around the space. The spatial contrast enhancement in Figure 3 is described by the area within the red and black curves. Only the contrasts, versus spatial frequency in the image, that fall above the contrast threshold function are visible to the person. As shown by the solid green line, a regular face at 10 ft has high contrast at low spatial frequencies and a 1/f drop in contrast with increasing spatial frequency. For the low-vision patient, all information in the face image above three cycles/degree is invisible. If the face image is magnified by 5, its contrast threshold function shifts to the left by 0.7 log cycles/degree (dashed green line) (15). This technology is built with artificial intelligence (AI) to create spatial intelligence: relationships between people, spaces, and devices, so that data can be queried from a physical space rather than from many different sensors (Figure 4). Moreover, extensive medical data, called DICOM files, were stored, loaded, and processed in a cloud service unit (Azure Graphics Processing Unit) to make the DICOM data available on the headset through a dedicated holographic application (1). "Black curve: the curve and the area within it represent contrast visible to the normal person." https://www.indjst.org/
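The 0.7 log-unit shift quoted above follows directly from the magnification: enlarging an image by a factor M divides every spatial frequency by M, which moves its spectrum left by log10(M) log cycles/degree on a logarithmic frequency axis. A minimal check in plain Python (illustrative only; the function names are ours, not from the cited work):

```python
import math

def log_frequency_shift(magnification):
    """Leftward shift, in log10 cycles/degree, of an image's spatial-frequency
    spectrum when the image is magnified by the given factor."""
    return math.log10(magnification)

def magnified_frequency(freq_cpd, magnification):
    """Spatial frequency (cycles/degree) of a feature after magnification:
    enlarging the image lowers the spatial frequency of every feature."""
    return freq_cpd / magnification

print(round(log_frequency_shift(5), 2))   # 0.7, the value quoted for 5x magnification
print(magnified_frequency(15.0, 5))       # a 15 cpd detail drops to 3.0 cpd
```

At 5x magnification, detail at up to 15 cycles/degree in the original image is brought down to the 3 cycles/degree visibility limit mentioned above.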

Literary Review
Mixed reality technology is used in many fields, including the medical and surgical fields, where it builds on augmented reality technology. Its implementation is supported by programming-language C++ classes and 3D models. Here we review some research in the MR field, for example:

Mixed Reality technology & HMD
In (3), the author presented the use of enhanced reality technology and its relevance to HMDs, which help display sufficient spatial information to support appropriate decisions. A suitable application is crisis and disaster management: water management authorities use predicted hydrological models of water surfaces, combining them with real images, to identify the problem and how far areas are protected by creating mobile water protection walls. In (5), the author presented the components and interactions of mixed reality technology, which integrates virtual elements with the real world, and the use of high-speed information transfer technology to link the hybrid reality system with the hologram to communicate with the world.
In (16), a three-dimensional model was designed for display on a high-resolution screen with KUMMERZ's visual mark-up and key user interfaces. The system uses a camera whose lens angle works on the IDS image, converts it into a three-dimensional model, and exports it to Cinema 4D, where it is processed and then exported to Unity.
In (17), the use of mixed-reality technology was presented. The equipment demonstrates the integration of virtual elements and the real environment in terms of studying sound effects, motion, reverse reverberation, and reflections.
In (18), the author presented a solution to the problem of audiences interacting with their surroundings by virtually removing the headset and showing the face underneath it, using a union of 3D vision, machine learning, and graphics techniques.

Mixed Reality In Medical Field
In (19), the author presented the use of an MR-HMD in intestinal surgery by using surgical imaging, providing information to surgeons, and applying display and tracking technology. Biomechanical modeling and object-recognition algorithms will facilitate the MR-HMD in bowel surgery through the conversion of X-rays into a three-dimensional model.
In (17), the author presented the storage of extensive medical data, called DICOM files, in a cloud unit so that they can be shown during surgery through a dedicated holographic application, with loading, analysis, and processing of the data in the Azure Graphics Processing Unit.
In (20), the author implemented a method using a volume image and the surface of the patient's body as a 3D object viewed from three various angles. The volume rendering showed the 3D object with a focus on the surface only; it is not possible to view the internal structure with this visualization system. The internal structure can be observed from the three angles after decreasing the opacity rate, and several positions and parts can be viewed with higher quality: the spinal cord, liver, and kidneys can be perceived accurately as 3D structures. The rendered object was viewed from three different angles using the first dataset.

Mixed Reality Tool Kit
In (13,14), the authors provided a significant comparison between the MR Toolkit and the AR Toolkit, demonstrating the extent to which MR builds on and is adopted from AR tools that rely on the mark-up approach. MR tools normalize a higher level of technology by combining the virtual content with the real environment. Both systems consist of a set of C++ classes and include a brief overview of the Windows-environment Software Development Kit (SDK).
In (21), the authors convened a group of users to gather reflections on their use of the DART system and the tools they produced, their subsequent AR-MR authoring, their thoughts on the challenges of AR-MR in general, and the state of new tools.
In (22), the authors presented Virtual Touch, a toolkit that enables the development of educational activities in a mixed reality environment; using different tangible elements, a relation between the virtual and the real world is made available.

Tracking Sensor in surgery
In (4), the author presented the use of tracking systems, sensors, and RFID tags for tracking, using MR technology with a comprehensive set of patient images, and using mobile phones with advanced technology in surgery in collaboration with AR.

Augmented Reality & 3D Anatomy
In (23), the authors presented AR as a ground-breaking development in the field of image-guided surgery (IGS). Using computer-generated data derived from radiological images, they presented the AR system's use and value in open surgery and the challenges regarding the implementation of each component of the AR system.
In (24), the authors presented a 3D anatomy model framework to help researchers interested in 3D models improve students' learning experience in undergraduate medical and allied healthcare curricula.

The General Framework for the Main Steps of Anatomy:
The proposed flowchart is designed to improve the performance of anatomy training based on the mixed reality technique. The proposed application uses Mixed Reality to help trainees retrieve anatomical information about each organ of the human body through co-existence within the virtual and real environment of the operating room and its correlation with the imaginary elements of the anatomy. Medical data and patient files, called DICOM files, containing many images are stored on the cloud network after conversion to a 3D model and some editing. MR content is shared in the real world and returned to on an ongoing basis for training scenarios and design review. Azure Spatial Anchors (ASA) supports the trainer with the best understanding of their information and medical data, where and when they need it, by connecting digital content to physical anchors of interest and bringing in 3D to support decision-making with the team, for internal/external locations and for the foreign surgeon/trainer. The combination of the translation and rotation matrices with Azure Spatial Anchors determines the model location relative to the (0,0) axis. Azure IoT is the primary system for building contextually aware training solutions through the Azure Digital Twins (ADT) service, an IoT service that helps create global models of physical and real environments and build spatial intelligence graphs of the interactions between people, places, and devices, integrating IoT services and artificial intelligence while protecting sensitive data on Azure, as shown in Figure 5.
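The two-stage flow described above, DICOM classification/conversion followed by cloud registration under a spatial anchor ID, can be sketched as a simple data flow. This is an illustrative outline only: the class and function names are hypothetical placeholders, not Azure SDK calls.

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class TrainingCase:
    """Hypothetical container for one patient's training data."""
    patient_id: str
    dicom_files: List[str] = field(default_factory=list)
    model_3d: Optional[str] = None   # name of the converted 3D model
    anchor_id: Optional[str] = None  # spatial anchor ID (placeholder)

def classify_and_convert(case: TrainingCase) -> TrainingCase:
    # Stage 1: classify the DICOM slices and convert them into a 3D model
    # (a real system would run segmentation/ML here; we only record the result).
    case.model_3d = f"model-{case.patient_id}.obj"
    return case

def register_on_cloud(case: TrainingCase) -> TrainingCase:
    # Stage 2: store the model and metadata on the cloud network and attach a
    # spatial anchor ID so a trainee can relocate the hologram later.
    case.anchor_id = f"asa-{case.patient_id}"
    return case

case = register_on_cloud(classify_and_convert(TrainingCase("P001", ["ct_001.dcm"])))
print(case.model_3d, case.anchor_id)  # model-P001.obj asa-P001
```

In the real architecture, Stage 2 corresponds to the Data Lake, Azure Cosmos DB, and Azure Spatial Anchors services described in the next section.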

Architecture Explanation
1. The trainee authenticates to the management web service and provides the name of the space where the object model is located in Azure Digital Twins.
2. The trainee's web service (making a safe, trusted connection) authenticates itself to Azure AD.
3. The Azure AD token is then sent to the Azure Spatial Anchors (ASA) service to obtain an access token for the trainee to use later.
4. The app service retrieves information about the IoT sensors and the anchor IDs they correspond to in the area specified by the trainee, and returns the IoT sensor IDs found in Azure Spatial Anchors.
5. The trainee application completes a visual scan of the environment and retrieves its position in the area. Using the nearby API of Azure Spatial Anchors, it retrieves the positions of all nearby anchors.
6. Azure Data Factory is used to compile medical image data securely.
7. Azure Data Lake Store is used to store medical image data securely.
8. The trainee asks the app to view IoT sensor data and controls, displayed in 3D in free space where the sensors are located, making it easy for the operator to detect and fix any issues. The data is stored in, and recalled from, the app's web service via Azure Cosmos DB.
9. An Azure Cognitive Services API or a machine learning model is used to analyze the medical image data.
10. Azure Data Lake is used to store and secure the results of the machine learning model and artificial intelligence (AI).
11. Azure Digital Twins locates the axis and sends it to Event Hubs, triggered when data is updated from an IoT sensor.
12. To change and update data in Azure Cosmos DB, Azure Functions with an Event Hubs trigger should be used to process the data.
13. The trainee can then retrace the exact steps of the expert who exported and classified the medical image procedure and view holographic videos or 3D models (25) of each step at the right location in the lab.
14. The metadata of the 3D model and the medical data are stored in Azure Cosmos DB after being encoded and prepared for viewing by the media service.
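Step 5's "nearby anchors" query amounts to filtering known anchor positions by distance from the trainee's scanned position. A simplified stand-in in plain Python (the data and function are illustrative, not the actual ASA nearby API):

```python
import math

def nearby_anchors(position, anchors, radius_m=5.0):
    """Return the IDs of anchors within radius_m metres of the given position.
    'anchors' maps anchor ID -> (x, y, z) in metres; all values are examples."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))
    return [aid for aid, pos in anchors.items() if dist(position, pos) <= radius_m]

# Example: three anchors tied to IoT sensor IDs, trainee standing near the origin.
anchors = {"iot-01": (0.0, 0.0, 0.0), "iot-02": (2.0, 0.0, 1.0), "iot-03": (20.0, 0.0, 0.0)}
print(nearby_anchors((1.0, 0.0, 0.0), anchors))  # ['iot-01', 'iot-02']
```

The real service resolves anchors visually against the scanned environment; the distance filter here only illustrates the shape of the result the app receives.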

Camera Rotation & Translation Model
To model a camera accurately, we add to the ray-casting algorithm information about the camera lens, namely the intrinsic parameters. We use the pinhole camera model to describe the image acquisition process; it is general enough to parameterize a large number of cameras. The pinhole camera model defines the geometric relation between a 3D point and its corresponding 2D projection onto the image plane. We refer to the center of the perspective projection, the point where the rays intersect, as the optical or camera center, and to the line perpendicular to the image plane passing through the optical center as the optical axis. The perspective projection of a point (X, Y, Z) onto the image plane point (x, y) is described as follows (26,27): x = f X / Z, y = f Y / Z, where f is the focal length of the camera, that is, the distance between the pinhole and the image plane.
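The projection can be checked numerically. A minimal sketch in plain Python (the function name is ours):

```python
def project_pinhole(X, Y, Z, f):
    """Perspective projection of the 3D point (X, Y, Z) onto the image plane
    of a pinhole camera with focal length f: x = f*X/Z, y = f*Y/Z."""
    if Z == 0:
        raise ValueError("point lies in the camera plane; cannot project")
    return f * X / Z, f * Y / Z

# A point 4 m in front of a camera with a 50 mm (0.05 m) focal length:
x, y = project_pinhole(2.0, 1.0, 4.0, 0.05)
print(x, y)  # 0.025 0.0125
```

Doubling Z halves both image coordinates, which is the familiar perspective foreshortening.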
The full camera model, with intrinsic and extrinsic parameters, can be represented with the following quantities:
Π0: projection matrix
K: intrinsic camera parameters
f: camera focal length
Sx and Sy: the relative aspect of each pixel
0x and 0y: image center coordinates
g: pose of the camera
R: 3×3 rotation matrix
T: vector in R³
R and T describe the rotation and translation of the camera relative to the world coordinate frame. We must calibrate the camera to obtain all the parameters of the camera model.
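The relation itself does not survive in the text; consistent with the listed parameters, the standard formulation these symbols follow in the computer-vision literature is (a reconstruction, not a quotation from the source):

```latex
% Full pinhole camera model: intrinsics K, canonical projection \Pi_0,
% and camera pose g = (R, T) relative to the world frame.
\lambda
\begin{pmatrix} x \\ y \\ 1 \end{pmatrix}
=
\underbrace{\begin{pmatrix}
  f S_x & 0     & 0_x \\
  0     & f S_y & 0_y \\
  0     & 0     & 1
\end{pmatrix}}_{K}
\underbrace{\begin{pmatrix}
  1 & 0 & 0 & 0 \\
  0 & 1 & 0 & 0 \\
  0 & 0 & 1 & 0
\end{pmatrix}}_{\Pi_0}
\underbrace{\begin{pmatrix}
  R & T \\
  \mathbf{0}^{\top} & 1
\end{pmatrix}}_{g}
\begin{pmatrix} X \\ Y \\ Z \\ 1 \end{pmatrix}
```

Here λ = Z is the projective depth; with Sx = Sy = 1 and 0x = 0y = 0 this reduces to the x = fX/Z, y = fY/Z projection above.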
A basic rotation is a rotation about one of the axes of a coordinate system. The rotation matrices rotate vectors by an angle θ about the x-, y-, or z-axis (Figure 6). The following matrices describe the direction of the 3D models' movement along the three axes during the use of the mixed reality system while moving the models (26,27).
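The three basic rotation matrices can be written out directly. A small sketch in plain Python (illustrative; a Unity/HoloLens application would use its engine's math types):

```python
import math

def rot_x(t):
    """Rotation by angle t (radians) about the x-axis."""
    c, s = math.cos(t), math.sin(t)
    return [[1, 0, 0], [0, c, -s], [0, s, c]]

def rot_y(t):
    """Rotation by angle t about the y-axis."""
    c, s = math.cos(t), math.sin(t)
    return [[c, 0, s], [0, 1, 0], [-s, 0, c]]

def rot_z(t):
    """Rotation by angle t about the z-axis."""
    c, s = math.cos(t), math.sin(t)
    return [[c, -s, 0], [s, c, 0], [0, 0, 1]]

def apply(R, v):
    """Multiply a 3x3 matrix by a 3-vector."""
    return [sum(R[i][j] * v[j] for j in range(3)) for i in range(3)]

# Rotating the x unit vector by 90 degrees about z maps it onto the y axis.
print([round(c, 6) for c in apply(rot_z(math.pi / 2), [1.0, 0.0, 0.0])])
```

Composing these matrices (e.g., rot_z · rot_y · rot_x) yields the general rotation used when a model is turned in the mixed reality scene.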

Results & Analysis
Having discussed the details of the surgical training program in the previous section through the flowchart, in this section we explain the implementation results with symbols and illustrations, which can benefit any surgeon or trainee studying any organ of the body according to the needs and condition of the patient.
According to Google Trends (28) (Figure 7), covering the technical use of Mixed Reality and Windows Azure technology from 2018 to 2019, there was a steady rate of spread from the beginning of 2018 until 2019; the rate of use of mixed reality technology then began to decline toward the end of the year before the indicators rose again at year's end along with Windows Azure. This motivates us to combine the latest technologies to create advanced medical simulations. In this section, the implementation phase of the program designed in the previous section goes through several stages: design of the 3D models using the Autodesk Maya program, taking into account that they should be of high quality in design yet low-poly (Figure 8) for ease of upload. The next stage uses a simulation program, the Unity game engine with the MR Toolkit (29). Azure Spatial Anchors has been used to design a mixed-reality program using C# and the HoloLens emulator to create a list of models (Figure 9-A), to use tools that transform the model along the X-axis and Y-axis (Figure 10) with the rotation matrices shown above in Equation 3, and to choose the model to be studied within the same simulation screen (Figure 9-B). Three essential elements must be met in a Mixed Reality training system when using the HoloLens VAR model: visual vision, force-feedback (FFB), and acoustics (Figure 11). According to the elements of the MR Toolkit mentioned above (Table 2), we can show some scripts used in it (Table 3).

Conclusion and Future Work
Mixed Reality is a suitable technique for the co-existence of imaginary models within the real environment, helping the person to identify places and information through Azure Spatial Anchors. The system classifies and develops the medical image, stores and retrieves it with Azure Cosmos DB, and uses IoT services to create universal models of physical environments through Azure Digital Twins. Microsoft HoloLens devices can coexist within the real environment and present imaginary models in it. This work aims to demonstrate the potential of the mixed reality technique by enabling training applications for anatomy study in surgery and providing information to the surgeon through displayed objects and model recognition on the HoloLens emulator/device. The proposed system can be compared with previous similar studies in terms of the components used (Table 4). Future work includes:
• Using real surgical instruments and connecting them with a set of sensors to feel the reaction.