Research Article  Open Access
Jie Liu, "Impact of High-Tech Image Formats Based on Full-Frame Sensors on Visual Experience and Film-Television Production", Wireless Communications and Mobile Computing, vol. 2021, Article ID 9881641, 13 pages, 2021. https://doi.org/10.1155/2021/9881641
Impact of High-Tech Image Formats Based on Full-Frame Sensors on Visual Experience and Film-Television Production
Abstract
Today, the application of high-tech image format technology in contemporary visual experience and film and television production has become increasingly mature. In the era that combines modern technology with the Internet, virtual numbers connect the past with the future, merge reality and myth, and even synchronize the world between the primitive and the modern. This article adopts experimental analysis and comparative analysis, setting up an experimental group and reference groups, and aims to use high-tech image formats from the perspective of a full-frame sensor to realize a perspective view and a three-dimensional view of indoor scenes for observers. In addition, the process of reconstructing a 3D model using high-precision geometric information and realistic color information is also described. The experimental results show that the sharpness threshold cannot be too small; otherwise, some clear images are misjudged as blurred images. If the threshold is too large, the missed detection of blurred images will increase. Combined with the subjective evaluation of the images, when the threshold is 0.8, the experimental result is close to the subjective evaluation, and the missed detection rate is 2.41%. This shows that the ASODVS three-dimensional digital scene constructed in this article can meet the needs of real-time image processing and can effectively evaluate the clarity of realistic analog images. It also shows that controlling the size of the coordinate value can affect the user's visual experience: the smaller the value within a certain range, the clearer the result.
1. Introduction
Computer-based 3D reconstruction technology is an emerging application technology with huge growth potential and practical value. It can be widely used in urban planning, medical investigation, construction sites, geographic research, bone remodeling, cultural relic investigation, crime investigation, and data collection. The reconstruction of 3D models with high-precision geometric information and realistic color information has always been a focus of research in computer interaction, intelligence, scanning molds, graphic drawing, and map drawing. 3D model reconstruction technology can be applied to different objects. The purpose of the 3D reconstruction of a specific object is to obtain information such as its 3D shape, curved surfaces, and actual color. 3D scene reconstruction is mainly used to capture the distribution, shape, and actual color of objects in a scene, for example, the reconstruction of cultural and historical sites, urban scenes, and urban landscapes, natural environment map modeling, and underground scene modeling. 3D scene reconstruction plays an important role in intelligent robot distance sensors, reshaping crime scenes, virtual displays, and many other fields. Its main purpose is to objectively reproduce real scenes and display them on mobile and client displays.
Based on data acquisition methods, commonly used 3D scene reconstruction methods can be divided into passive and active. When performing 3D reconstruction of scenes based on laser scanning, the calculation is complicated, and the reconstruction process cannot be fully automated. Commercial 3D laser scanners are generally expensive and complicated to operate, which limits their application to a certain extent. The method of reconstructing the scene based on active vision has the characteristics of low cost and high efficiency and is regarded as the most effective method of 3D scene reconstruction.
Existing passive 3D reconstruction of scenes requires a lot of computing resources for point matching and stereo measurement, and there are some "unconditional" calculation problems. Although the obtained model has texture information, the accuracy of the model is low, and depth information is lacking. Existing laser scanners must perform functions such as recording and combining 3D point data during reconstruction. In order to obtain point cloud data with color information, it is usually necessary to have a one-to-one correspondence between the three-dimensional coordinates and the color information of the spatial points. The existing data input methods are relatively strict, and some methods require prior knowledge of the scene.
In the early years, Yu used a sequence of images taken by an ordinary camera to reconstruct the scene. He proposed a factorization method to solve for the geometric information of the scene when the object is far away and for the camera motion. However, the parallel-projection reconstruction method cannot fully restore the realism of the scene when applied to real scenes [1]. Zajdel et al. used a multi-camera imaging method to capture video images from eight cameras distributed along the vertical axis, used GPS and inertial navigation to assist position and motion estimation, and performed 3D urban scene modeling, but this only applies to formal and regular urban scenes. In this imaging method, the precise installation of the cameras and the seamless stitching of multiple images have become obstacles to the realization of advanced eye technology [2]. In the field of fisheye lens imaging, Lhuillier made improvements based on the image capture of common household cameras. Using the principle of light reflection and refraction, he employed multiple cameras to scan and shoot a scene in 360 degrees and, for the first time, made a true 3D reconstruction of the target scene. However, his research did not simplify the calculation process and could not solve the problem of obtaining all the point cloud data and the corresponding color information of the scene from the same sensor at the same time [3].
The innovations of this article are as follows: (1) The new ASODVS scanning program is used. The program can simplify most of the calculation work in the experimental design of this article. The input data is loaded in an orderly manner according to the order set by the experimenter. In the process from the field to the cloud and from the cloud to the mobile terminal, the data is not lost or scattered but ordered in matrix form. Therefore, the two collected sets of lattice data can be automatically matched according to a certain order, which greatly reduces the amount of calculation and manual operation on the machine side and greatly improves efficiency. (2) This article directly simulates the picture observed by the human eye when building the scene, which greatly raises the requirements and standards of the experiment. It also gives the experimenter an immersive observation experience similar to a 3D movie. The platform construction, parameter setting, and algorithm optimization successfully complete the process of indoor 3D scene reconstruction [4].
2. Impact of the High-Tech Image Format of the Full-Frame Sensor on the Visual Experience and Film and Television Production
2.1. Design and Hardware Composition of ASODVS
As shown in Figure 1, the four coordinate systems of the four devices required for the experiment cannot form a single topological structure in space. Therefore, we need to pair and calibrate the different coordinate systems, and only after conversion can we meet the needs of the measurement and reconstruction platform [5]. The objects in the global coordinate system are finally displayed after the transformation of the four coordinate systems [6]. Formula (1) links the relationships between the coordinate systems [7]; it converts the dot matrix data placed into the matrix very well [8].
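As a hedged illustration of how the four coordinate systems are chained together (the role played by formula (1), which is not reproduced here), the following sketch composes 4x4 homogeneous transforms by matrix multiplication; the function name and matrix layout are assumptions for illustration, not the paper's formula:

```python
def compose_transforms(*mats):
    """Chain 4x4 homogeneous transforms (e.g. world -> camera ->
    image) by left-to-right matrix multiplication. Matrices are
    4x4 nested lists; a sketch, not the paper's exact formula (1)."""
    def matmul(a, b):
        return [[sum(a[i][k] * b[k][j] for k in range(4))
                 for j in range(4)] for i in range(4)]
    out = mats[0]
    for m in mats[1:]:
        out = matmul(out, m)
    return out
```

Chaining two translations this way accumulates their offsets, which is exactly the behavior needed when an object passes through several device coordinate systems in turn.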
In terms of hardware composition, the moving surface laser generator and the ODVS rearview sensor form the ASODVS. The overall structure of the system is shown in Figure 2. The four main modules are the omnidirectional vision sensor, the moving surface laser generator, the drive unit, and the operation unit. Among them, the omnidirectional vision sensor is mainly composed of a hyperboloid mirror, a common condenser, and an imaging chip, which can simultaneously obtain a 180-degree panorama in the positive direction and a panorama of the same angle in the negative direction [9], greatly expanding the experimental field of vision, as shown in Figure 2. The moving surface laser generator consists of four green line laser generators [10], which can emit 360-degree laser lines in the horizontal direction. The driver model is M542 (4.2 A, 50 V), and the motion controller model is TC4510. The end controller used by the experimenter is operated through software: from laser emission, recovery of lattice data [10], uploading to the cloud, data matching, and data calculation to producing modeling files and forming an operable panel, everything is done in one go [11, 12].
Establish a coordinate system with the single viewpoint Om as the origin and let its axis be aligned with the optical axis of the mirror. Let a point in space be given, with its projection on the sensor plane and its pixel on the image plane. The display process is as follows: the space point is projected onto a point on the mirror by the projection matrix transformation [13]; after reflection, the ray is focused on the optical center of the camera and intersects the sensor plane; after an affine transformation, the corresponding point on the image plane is obtained. The single-viewpoint catadioptric camera imaging model described above traces a spatial point to the mirror point [14, 15], from the mirror to the point in the sensor plane, and then to the point in the image plane that forms the pixel in the image.
Among them, the conversion relationship from the catadioptric mirror to the sensor plane is shown in formula (2):
In the formula, the first term represents the coordinate vector of the space point, the second is the projection matrix, the third is the rotation matrix from the space point to the catadioptric mirror point [16], and the fourth is the translation matrix from the space point to the catadioptric mirror point.
The formula for converting from the sensor plane to the image plane is as follows [17]. The formula reflects the fact that the matrix entries can be gathered into a collection and that, after the calculation, the result is added with a translation term:
A single function is used to replace the two functions in formula (2); that is, the function f is used to connect the two-dimensional lattice and the three-dimensional space to obtain formula (4):
Since errors are introduced during the actual machining and assembly of the omnidirectional vision sensor, the ODVS can be assumed to conform to the ideal model [18]; replacing the non-ideal model, which contains some errors, with the simplified model yields formula (5):
Formula (5) can be used to establish the correspondence between any pixel on the imaging plane and its angle of incidence, as shown in Table 1 [19]:
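Since formula (5) itself is not reproduced in this text, the pixel-to-incidence-angle lookup implied by Table 1 can only be sketched under a deliberately simplified assumption: that the incidence angle varies linearly with the radial pixel distance from the panorama center. The function name, image radius, and angle range below are illustrative only:

```python
def build_incidence_table(image_radius_px, min_angle_deg, max_angle_deg):
    """Map each radial pixel distance to an incidence angle.

    Assumes a simplified (ideal) ODVS model in which the incidence
    angle grows linearly with the radial distance from the panorama
    centre -- a stand-in for the correspondence of formula (5)."""
    span = max_angle_deg - min_angle_deg
    return {r: min_angle_deg + span * r / image_radius_px
            for r in range(image_radius_px + 1)}
```

In practice such a table would be computed once from the calibrated mirror model and then used for constant-time lookups during real-time processing.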

2.2. Moving Surface Laser Transmitter
In the design of a 3D perception and information reconstruction system based on active vision, the effective projection of the light source plays a vital role in the structure and accuracy of the point cloud data. To match the ability of the ODVS to simultaneously capture 360-degree panoramic images of the scene, the moving surface laser generator must project a laser light source that covers 360 degrees in the horizontal direction and can move up and down in the vertical direction to complete the scan of the scene [20]. Based on this design goal, the following first introduces the principle of laser ranging and then explains the specific design of the moving surface laser generator in ASODVS and the assembly method of the moving surface laser light source [21].
Commonly used laser ranging principles are the triangulation method, the time-of-flight method, and the phase method [20]. The triangulation method measures the distance between the target point and the known end points of a fixed reference line, as shown in Figure 3. The three lines in the figure satisfy the requirements of the triangulation method, and the included angles can be calculated geometrically to obtain the ranging information. A laser scanning system using this method is mainly composed of a laser generator and a CCD camera, which form a spatial plane triangle with the target point, as shown in Figure 3. The angles among the emitted light, the incident light, and the baseline are read from the scanner's angle sensor, and geometric calculation on these included angles yields the 3D information.
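The law-of-sines calculation behind the triangulation method can be sketched as follows; the angle convention (both angles measured from the baseline, at the laser and at the camera respectively) is an assumption for illustration:

```python
import math

def triangulation_range(baseline, angle_laser, angle_camera):
    """Distance from the camera to the target point in the triangle
    (laser, camera, target), by the law of sines.

    `baseline` is the known laser-camera distance; both angles are in
    radians, measured from the baseline. The angle at the target is
    pi - angle_laser - angle_camera, so
    sin(target angle) = sin(angle_laser + angle_camera)."""
    return baseline * math.sin(angle_laser) / math.sin(angle_laser + angle_camera)
```

For example, with a 1 m baseline and both angles at 45 degrees, the target sits at the apex of a right isosceles triangle, about 0.707 m from the camera.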
When the time-of-flight or phase method is used as the ranging principle, the transmitting and receiving devices of the laser rangefinder must be a laser transmitter and receiver, which belong to point-to-point measurement; the receiver of the triangulation method, in contrast, is a CCD imaging chip. Since the CCD imaging chip can obtain planar image information, the triangulation method can use a line laser or surface laser light source. This not only improves the scanning efficiency of the laser but also effectively reduces the complexity of the system.
The line laser generator is an important part of the moving surface laser generator. Laser light sources of different power levels have different effects on the user's personal safety; for this reason, this article analyzes the selection of laser emission power and the corresponding precautions. Under normal circumstances, the greater the laser power, the wider the emission range, but high-power laser generators usually pose safety hazards [21]. Laser manufacturers use class I to class IV to indicate the degree of laser damage to the human body, from low to high. Class I lasers are not harmful to the human body and are mainly used in laser printers and some experimental equipment. Class II lasers cause a certain degree of damage to the eyes and can be used for aiming or distance-measuring equipment. Class III covers medium-power laser transmitters and is divided into two levels: class IIIA and class IIIB. Class IIIA laser power is in the range of 1 milliwatt to 5 milliwatts and can be emitted as continuous laser waves. Class IIIB laser power ranges from 5 to 500 milliwatts [22] and can be used for laser scanners and stereo photography; the eyes must be protected when using class IIIB lasers and must not look directly at the laser light source. Class IV lasers are high-power lasers that can be used in surgery, cutting, and other fields [23].
When designing the hardware, the ODVS and the moving surface laser generator are fixed on the same axis. The ideal assembly situation is that the axis line of the single viewpoint Om of the ODVS and the scanning planes of the 360° laser emitted by the moving surface laser generator are perpendicular to each other. To achieve this goal, we used a hollow cylinder to calibrate ASODVS during assembly. The specific method is as follows: (1) put the ASODVS into the hollow cylinder vertically so that the axis of the ASODVS coincides with the axis of the hollow cylinder, (2) constantly change the distance between the moving surface laser generator and the viewpoint Om of the ODVS, and (3) observe the panoramic sequence images obtained from ODVS.
2.3. Application of ASODVS
If the aperture produced by the projection of the moving surface laser generator in the panoramic sequence images is a series of perfect circles centered on the panoramic image, then the ASODVS assembly is complete; otherwise, fine-tuning is required to make the ASODVS meet the ideal design requirements. In addition to this observation method, the panoramic images with laser information generated in the above process can also be saved and analyzed algorithmically to determine whether the ASODVS has reached the assembly requirements [24, 25].
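The algorithmic check mentioned above might look like the following sketch, which tests whether the detected laser pixels lie on a circle centered on the panorama center; the relative tolerance value is an assumption:

```python
import math

def laser_ring_is_circular(points, center, rel_tol=0.02):
    """Check whether detected laser pixels form a circle centred on
    the panorama centre, as required after ASODVS assembly.

    `points` is a list of (x, y) pixel coordinates, `center` the
    panorama centre; the ring passes if no radius deviates from the
    mean radius by more than `rel_tol` (relative)."""
    cx, cy = center
    radii = [math.hypot(x - cx, y - cy) for x, y in points]
    mean_r = sum(radii) / len(radii)
    max_dev = max(abs(r - mean_r) for r in radii)
    return max_dev / mean_r <= rel_tol
```

A tilted laser plane would project as an ellipse rather than a circle, so this radius-spread test flags the misalignment that manual fine-tuning is then meant to correct.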
Figure 4 is the flow chart of the entire system:
When ASODVS calculates the three-dimensional coordinates of a space point, it first needs to extract the laser points in the panoramic image. Laser point extraction algorithms include the algorithm based on the HSI color model, the inter-frame difference algorithm, and the three-frame difference algorithm; the choice is also closely related to the sampling speed of ASODVS. ASODVS inevitably has some errors in calculating the three-dimensional coordinates of space points; statistically, the maximum error of the distance from a point in the cloud to the single viewpoint is within 3%. In most cases, the laser generator, the measured object, and the camera are in different spatial positions, which is why the four coordinate systems mentioned above cannot form a coincident, parallel, or vertical relationship. Since the four coordinate systems cannot form a simple relationship, registration and conversion between coordinate systems is a necessary step in the measurement and reconstruction process. Objects in the world coordinate system pass through the camera coordinate system, the ideal image coordinate system, and the real image coordinate system and are finally imaged in the digital image coordinate system.
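As an illustration of the inter-frame difference idea (not the paper's exact implementation), candidate laser pixels can be found by thresholding the intensity change between consecutive panoramic frames, since the moving laser plane is the main thing that changes between frames:

```python
def frame_difference_mask(prev_frame, curr_frame, threshold):
    """Inter-frame difference: mark pixels whose grey value changes
    by more than `threshold` between consecutive frames as candidate
    laser points. Frames are equally sized 2-D lists of grey values;
    returns a 2-D list of booleans."""
    h, w = len(curr_frame), len(curr_frame[0])
    return [[abs(curr_frame[i][j] - prev_frame[i][j]) > threshold
             for j in range(w)] for i in range(h)]
```

A three-frame variant would AND two such masks to suppress pixels that change only once (e.g. sensor noise), at the cost of one extra frame of latency.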
3. Experimental Research on the Impact of High-Tech Image Formats of Full-Frame Sensors on Visual Experience and Film and Television Production
ASODVS can scan the indoor scene and realize its reconstruction, and it is the main software platform used in this experiment. This section mainly elaborates the method of acquiring and modeling 3D information of indoor scenes. With the research and development of computer three-dimensional graphics, the point-based three-dimensional model has attracted the attention of many researchers. Therefore, a relatively simple point model method was chosen.
The 3D mesh model is obtained by processing the point cloud data. Generally, it is necessary to construct the topological structure of the 3D point cloud in order to perform neighborhood operations on each point; on this basis, certain algorithms can be used to obtain the corresponding 3D mesh model. The 3D mesh model is currently the mainstream method of 3D modeling. It contains the topological relationships between points and can better reflect the geometric information of the object's surface. Based on the above considerations, this work uses a 3D point cloud model and a mesh model to reconstruct the indoor 3D scene. Moreover, because the three-dimensional point cloud data obtained by ASODVS is ordered, the system can better meet real-time requirements without constructing a topological structure.
3.1. Preparation
The ODVS in this article uses a USB CMOS camera module connected to the microprocessor through a USB interface. The configuration of the microprocessor is as follows: the CPU is a Pentium 4, with 2 GB of memory and a discrete graphics card with 512 MB of video memory; the operating system is Windows 7, and a self-developed 3D panoramic point cloud data acquisition software based on ASODVS is installed in the system. The software is implemented in Java and Java3D.
3.2. Experiment Method
In this paper, an experiment was conducted in a scene of about 25 square meters arranged in the lobby of a teaching building of a university, and an indoor scene was built with a full-frame sensor for data collection [26]. Before acquiring the point cloud data, the ASODVS is first placed in the center of the scene. The azimuth angle of the omnidirectional vision sensor is aligned with the true north direction in the space. At this time, the projection point of the single viewpoint Om on the ground plane is the origin (0,0,0) of the three-dimensional world coordinate system. In order to make the contour features of the reconstructed model more obvious, we placed a number of colored obstacles in the indoor scene and arranged them in an irregular manner. In addition to ordinary flat objects, the indoor scene also includes curved surfaces and door-shaped parts to increase the richness of the scene.
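The geometry by which a detected laser point is turned into a world coordinate relative to the origin at Om can be sketched as an idealized ray-plane intersection; the paper's exact ranging formula is not reproduced, so the function below and its angle conventions are assumptions for illustration:

```python
import math

def laser_point_to_world(incidence_rad, azimuth_rad, plane_z):
    """Intersect the viewing ray from the single viewpoint Om (the
    origin) with the horizontal laser plane z = plane_z.

    `incidence_rad` is the angle between the ray and the vertical
    axis, `azimuth_rad` the horizontal angle from the reference
    (true-north) direction. Returns the (x, y, z) world coordinate
    of the illuminated point -- a hedged sketch of the geometry."""
    r = abs(plane_z) * math.tan(incidence_rad)  # radial distance from the axis
    return (r * math.cos(azimuth_rad), r * math.sin(azimuth_rad), plane_z)
```

Sweeping the laser plane through a range of heights and repeating this intersection for every azimuth yields the ordered point cloud that the later sections reconstruct.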
ASODVS software is used to edit the algorithm calculations, apply different parameters, observe the modeling effect on the display, and evaluate the impact of different image formats on the visual experience under different parameters.
In the 3D display and drawing of the scene, virtual reality technology is adopted because of the three desired characteristics of the experimental effect: immersion, interaction, and imagination. This technology is realized by integrating multiple techniques, such as graphics, digital image processing, and three-dimensional modeling, and enables people to observe and experience realistic environmental scenes and interact with them.
3.3. Experiment Procedure
The sharpness evaluation method is not only an important link in measuring the quality of a digital image but also the basis for realizing the automatic focus of a digital imaging system, as well as an important means of judging the imaging quality of a digital imaging device. A good evaluation system is related to the quality and efficiency of data collection; it eliminates the influence of subjective factors introduced by different evaluators, ensures the consistency of evaluation standards, and facilitates comparison and optimization between different algorithms, so that the subjective and objective evaluations are consistent and of high application value. Based on this, this article studies a class of sharpness evaluation problems for document images collected by high-speed scanners.
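The text does not specify its sharpness metric explicitly; a common no-reference stand-in is the variance of a Laplacian response (sharp images have strong, varied edge responses; blurred ones do not), sketched here purely for illustration:

```python
def laplacian_variance(img):
    """Variance of a 4-neighbour Laplacian response over the image
    interior -- a common no-reference sharpness measure, used here
    as an illustrative stand-in for the paper's unstated metric.
    `img` is a 2-D list of grey values; higher result = sharper."""
    h, w = len(img), len(img[0])
    responses = []
    for i in range(1, h - 1):
        for j in range(1, w - 1):
            lap = (img[i - 1][j] + img[i + 1][j]
                   + img[i][j - 1] + img[i][j + 1]
                   - 4 * img[i][j])
            responses.append(lap)
    mean = sum(responses) / len(responses)
    return sum((r - mean) ** 2 for r in responses) / len(responses)
```

A perfectly flat image scores zero, while high-contrast detail drives the score up; a threshold on this score then separates "clear" from "blurred" scans.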
Figures 5(a)–5(c) show the actual images and point cloud models at different angles in different scenes. The red line in the figures represents the scene's ground profile. In addition to reading the point cloud data and displaying it in real time, the software system can also perform operations such as rotation, translation, enlargement, and reduction on the obtained model through the keyboard and mouse, as shown in Figure 6.
(a) Arc part
(b) Gate part
(c) Color obstacle part
Comparing with Figure 5, it can be seen that the scanned point cloud data model can basically reflect the edge contours of the scene and can express the distribution, shape, and color information of the objects in the scene. This article refers to the display method of Figure 5 as "object-centric model display," which means that the entire scene is treated as an object viewed from the outside.
In this paper, an experiment was conducted on the method of establishing a grid model from an ordered point cloud. From the steps of the algorithm, it is known that, in the specific implementation, the points on each scan slice need to be connected with their neighboring points. In this paper, the line geometry in Java3D is used to connect the point cloud data to construct the grid. The grid model established by this method does not require normal vector calculation or topological structure construction and can be generated from the point cloud data in real time. Since each quadrilateral mesh cell can be split into triangles in more than one way, two triangular mesh models of the indoor scene can be obtained.
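The quad-splitting step can be sketched as follows for an ordered rows x cols grid of point indices; one of the two possible diagonal choices is fixed arbitrarily here (choosing the other diagonal yields the second mesh model mentioned above):

```python
def ordered_cloud_to_triangles(rows, cols):
    """Connect an ordered point cloud (a rows x cols grid of vertex
    indices, row-major) into a triangular mesh by splitting each grid
    quad into two triangles. Because the scan data is already ordered,
    no normals or extra topology are needed -- matching the property
    of the ASODVS data described in the text."""
    tris = []
    for i in range(rows - 1):
        for j in range(cols - 1):
            a = i * cols + j       # top-left corner of the quad
            b = a + 1              # top-right
            c = a + cols           # bottom-left
            d = c + 1              # bottom-right
            tris.append((a, b, c))  # split along the b-c diagonal
            tris.append((b, d, c))
    return tris
```

An R x C grid yields 2*(R-1)*(C-1) triangles, so the mesh size is known in advance and buffers can be preallocated for real-time display.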
Virtual reality technology must have the three characteristics of immersion, interactivity, and imagination, whose relationship is shown in Figure 6. Immersion refers to making observers feel that they are present in the scene and environment; computer technology is used to simulate and approximate the three-dimensional scene realistically, in some cases with the help of specific equipment such as 3D glasses and helmets. Interactivity means that observers can operate and interact with the observed scene environment through a computer keyboard, mouse, and other equipment. Imagination means that observers can have a sense of reality in the virtual environment and feel this virtual space. This article draws on these characteristics of virtual reality technology when designing the observer-centered 3D panorama rendering. Since the active panoramic stereo vision sensor used here has the single-viewpoint property, the single viewpoint is used as the origin of the spatial coordinate system during data acquisition and reconstruction of the indoor scene. Therefore, the display can start from the single-viewpoint property and combine it with the characteristics of human vision to render a 3D panorama centered on the observer.
According to ergonomic principles, the right-viewpoint image can be calculated by treating the generated perspective view as the left-viewpoint image; alternatively, if the generated perspective view is used as the central-eye image, both left and right viewpoint images need to be generated to form a stereo image pair. This article adopts the first method: the perspective view is taken as the left viewpoint, and the right-viewpoint perspective view is then obtained to construct the disparity map and realize stereo display.
Taking the single viewpoint Om (0,0,0) of ASODVS as the coordinates of the left-eye viewpoint, formula (6) can be applied to calculate the coordinates of a space point in the right-eye viewpoint, where the interocular distance appears: the distance between a woman's eyes is 55–65 mm and between a man's eyes is 58–68 mm, and a value in this range is used in formula (6).
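A minimal sketch of the right-eye viewpoint offset is shown below; the 60 mm default is an assumed mean of the quoted 55–65 mm and 58–68 mm ranges, and the heading convention is illustrative, not the paper's formula (6):

```python
import math

def right_eye_viewpoint(left_eye, interocular_mm=60.0, heading_rad=0.0):
    """Offset the left-eye viewpoint by the interocular distance,
    perpendicular to the viewing direction in the horizontal plane,
    to obtain the right-eye viewpoint. A hedged sketch: 60 mm is an
    assumed mean of the 55-65 mm / 58-68 mm ranges in the text."""
    lx, ly, lz = left_eye
    return (lx + interocular_mm * math.cos(heading_rad),
            ly - interocular_mm * math.sin(heading_rad),
            lz)
```

Rendering the scene once from each of the two viewpoints produces the stereo image pair from which the disparity map is built.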
According to the previous description of perspective drawing with the observer as the center, we first need to determine the initial azimuth angle corresponding to each viewpoint. Through the spatial relationship between the left and right viewpoints, when the initial azimuth angle corresponding to the left viewpoint is known, the initial azimuth angle corresponding to the right viewpoint can be calculated. The observer-centered stereo map drawing algorithm is described as follows:
Step 1. Determine the respective initial azimuth angles of the left and right viewpoints and read the minimum incident angle and the maximum incident angle.
Step 2. .
Step 3. Perform perspective view display calculation for the point cloud data to be displayed corresponding to the left and right viewpoints and obtain two perspective display matrices PL1 and PR1. The calculation method will not be repeated.
Step 4. Judge whether to display the grid; if the judgment result is yes, construct the grid model; otherwise, continue.
Step 5. Determine whether the termination condition is established; if it is not established, go to step 3; if it is established, continue.
Step 6. Assign the color values of the point cloud matrix PL1 calculated in step 3 to the corresponding positions in PR1.
Step 7. End, obtaining a new matrix PS.
Based on the perspective display drawing of the indoor scene model, a 3D stereo display rendering of the indoor scene can be obtained through red-blue stereo display technology, similar to the perspective display drawing. It is also possible to perform a perspective transformation on the basis of the stereoscopic display to realize a stereoscopic display of the panoramic scene. Observation with the aid of red-blue 3D glasses achieves a 3D stereoscopic display of the indoor scene. The stereoscopic display in this article achieves a certain 3D effect, but it cannot completely simulate fine stereoscopic effects such as those of movies.
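The red-blue composition used for anaglyph viewing can be sketched as follows; the channel assignment (red from the left view, green and blue from the right) follows the common red-left convention, which is an assumption since the text does not specify it:

```python
def red_blue_anaglyph(left_rgb, right_rgb):
    """Compose a red-blue stereo image: take the red channel from the
    left view and the green and blue channels from the right view.
    Both inputs are equally sized 2-D lists of (r, g, b) tuples."""
    h, w = len(left_rgb), len(left_rgb[0])
    return [[(left_rgb[i][j][0], right_rgb[i][j][1], right_rgb[i][j][2])
             for j in range(w)] for i in range(h)]
```

Viewed through red-blue glasses, each eye then receives only its own view, producing the depth impression described above.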
4. Impact of High-Tech Image Formats of Full-Frame Sensors on Visual Experience and Film and Television Production
4.1. Actual Results after Using Different Parameters
In order to better test the impact of high-tech image formats on visual experience and film and television production, we designed an experimental group and two control groups, each using different lattice data parameter settings on the ASODVS experimental device. The software conducts simulation analysis and obtains the scene pictures of the two reference groups.
As shown in Figure 7(b), the point cloud data in the red box is missing, and this missing part has a definite shape, so it can be judged that the laser line was blocked. By analyzing the shape of each part of the scene, it can be seen that this loss is caused by the support rod of the device itself shielding the line laser.
(a)
(b)
As shown in Figures 7 and 8 above, the resulting 3D modeling diagrams show varying degrees of model loss and sharpness reduction due to missing data and parameter settings beyond the valid range, which is not in line with the experimental expectations.
(a)
(b)
4.2. Parameter Comparison
In order to explore the impact of different lattice parameters on the clarity of 3D modeling, we set a total of five sets of 3D point cloud data as shown in Table 2:

Two rows of data represent one point: the first row gives the three-dimensional space coordinates (x, y, z) of the point, and the second row gives the color value (r, g, b) of the point. The RGB color mode is an industry color standard. It obtains a wide variety of colors by varying the three color channels of red (R), green (G), and blue (B) and superimposing them; RGB represents the colors of the three channels of red, green, and blue. This standard includes almost all the colors that human vision can perceive and is one of the most widely used color systems at present.
All six attribute values are saved in one class when stored. It is worth noting that, owing to the limitation of the site space when selecting the points, the laser line emitted by ASODVS lies in the horizontal plane, so it is impossible to calculate the three-dimensional coordinates of spatial points on the horizontal plane. In other words, the z coordinate in the collected point cloud data cannot be varied. Therefore, the value of the z coordinate in the five sets of three-dimensional point cloud data we selected is fixed at 30.
After putting the set data into the matrix for calculation and comparison, a statistical graph is made according to the software clarity scores, as shown in Figure 9. The 3D point cloud data obtained by ASODVS is saved in a text file. Some examples of point cloud data are as follows:
252.76, 11.12, 30.0,
0.5490196078431373, 0.5490196078431373, 0.6470588235294118
252.86, 12.72, 30.0,
0.6470588235294118, 0.6470588235294118, 0.7490196078431373
252.77, 14.31, 30.0,
0.6470588235294118, 0.6470588235294118, 0.7490196078431373
...
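A small parser for the two-line record format shown above might look like this; tolerating the mixed trailing punctuation seen in the sample is an assumption based on that sample:

```python
def parse_point_cloud(text):
    """Parse the ASODVS text format shown above: each point is two
    lines, the first with the x, y, z coordinates and the second
    with the normalised r, g, b colour values. Trailing commas or
    periods on a record line are tolerated; non-record lines (no
    comma) are skipped."""
    lines = [ln.strip().rstrip('.,') for ln in text.strip().splitlines()
             if ',' in ln]
    points = []
    for i in range(0, len(lines) - 1, 2):
        coords = tuple(float(v) for v in lines[i].split(','))
        color = tuple(float(v) for v in lines[i + 1].split(','))
        points.append((coords, color))
    return points
```

Because the records are ordered, the parsed list preserves the scan order that the meshing step later relies on.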
It can be seen from the statistical graph that, with the z coordinate held constant in the lattice coordinates, the sharpness data is closely related to the y value. When z remains 30.00 and x changes from 252.78 to 260.77, the sharpness of the produced model remains 0.779. Therefore, when using the dot matrix model set by ASODVS, the abscissa (x) of the collected data points has no effect on the sharpness. The 3D point cloud model of the scene does not show the horizontal planes of the scene. The reason for this problem is that the laser line emitted by ASODVS lies in the horizontal plane, so the three-dimensional coordinates of spatial points on the horizontal plane cannot be calculated. This problem can be solved by improving the ASODVS hardware.
4.3. Relationship between the y Coordinate of 3D Point Cloud Data and the Clarity of Visual Experience
Based on the above research results, we ignore the x coordinate and z coordinate when collecting data and focus on analyzing the impact of the y coordinate in the five sets of data on the clarity of the visual experience.
After rearranging the data, a statistical graph based on the software clarity score is made again, as shown in Figure 10.
Analyzing Figure 10, it can be seen that when the y coordinate increases linearly from 11.13 to 17.57, the sharpness decreases from 0.779 (very high sharpness) to 0.021. The definition evaluation score ranges used in this article are 0–0.2 blurry, 0.2–0.4 can be seen clearly, 0.4–0.6 clearly visible, 0.6–0.8 very clear, and 0.8–1 ultra-high definition.
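The five score bands above can be expressed as a small lookup function; the function name and the boundary convention (half-open intervals, with the top band closed at 1) are our assumptions, since the article does not state how boundary scores are assigned.

```python
def clarity_label(score: float) -> str:
    """Map a sharpness score in [0, 1] to the verbal bands used in the article."""
    if not 0.0 <= score <= 1.0:
        raise ValueError("score must lie in [0, 1]")
    bands = [
        (0.2, "blurry"),
        (0.4, "can be seen clearly"),
        (0.6, "clearly visible"),
        (0.8, "very clear"),
        (1.0, "ultra-high definition"),
    ]
    for upper, label in bands:
        if score < upper:
            return label
    return "ultra-high definition"  # score == 1.0 falls in the top band
```

Under this convention, the two endpoints reported above fall into "very clear" (0.779) and "blurry" (0.021).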
This shows that changing the value of the y coordinate within a certain range can control the visual clarity of the 3D model. It is worth mentioning that in data set 1 the clarity reaches 0.779, which essentially simulates human vision in displaying the 3D reconstruction results of the scene. By combining object-centered and observer-centered display technologies, the interior and exterior panorama of the scene is better integrated and displayed, and the observer can experience and appreciate the three-dimensional digital scene from multiple angles and in all directions under the given parameters.
In order to test the effectiveness of the evaluation algorithm, we evaluated 4000 on-site document images. The images were scanned with Belllink's copi8000 at 100 dpi and 256-level grayscale, and 267 of them were blurred. The experiment obtained relatively ideal results. Taking the detection of blurred images as an example, different threshold parameters can be selected to evaluate the sharpness of the images. The experimental results are shown in Table 3.
As shown in Table 3, the experimental results indicate that the sharpness threshold cannot be too small; otherwise, some clear images are misjudged as blurred. If the threshold is too large, missed detections of blurred images increase. Combined with the subjective evaluation of the images, when the threshold is 0.8, the experimental result is close to the subjective evaluation, and the missed detection rate is 2.41%. This shows that the ASODVS 3D digital scene constructed in this paper can meet the needs of real-time image processing and can effectively evaluate the clarity of realistic analog images.
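The threshold experiment can be reproduced in outline as follows. The data layout (a list of (score, is_blurred) ground-truth pairs) and the missed-detection definition (truly blurred images classified as clear, divided by the number of blurred images) are our assumptions about the protocol; the article does not spell them out.

```python
def missed_detection_rate(samples, threshold):
    """Fraction of truly blurred images that the threshold rule misses.

    samples: iterable of (sharpness_score, is_blurred) pairs, where
    is_blurred is the ground-truth label. An image is classified as
    blurred when its score falls below the threshold; a miss is a
    truly blurred image whose score is at or above the threshold.
    """
    blurred = [score for score, is_blurred in samples if is_blurred]
    if not blurred:
        return 0.0
    misses = sum(1 for score in blurred if score >= threshold)
    return misses / len(blurred)
```

Sweeping `threshold` over a grid and comparing the resulting rates against subjective judgments is how a value such as 0.8 would be selected.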
To further illustrate the superiority of the proposed method, we compared it with other general no-reference quality assessment methods, including BIQI, BRISQUE, DESIQUE, CORNIA, DIIVINE, QAC, and SSEQ. The test data are plotted in Figure 11.
It can be seen from Figure 11 that, on the BID and CID camera image databases, the overall performance of the method in this chapter is better than that of these conventional no-reference quality evaluation methods. In the BID database, under the PLCC evaluation index, the general no-reference quality evaluation method BIQI yields a blur score of 0.442, while the score of pictures processed by the ASODVS three-dimensional modeling software rises to 0.6483.
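The PLCC index used here is the Pearson linear correlation coefficient between a method's predicted quality scores and the subjective ratings. A plain implementation, with no external libraries, looks like this:

```python
import math

def plcc(predicted, subjective):
    """Pearson linear correlation coefficient between two score lists."""
    n = len(predicted)
    if n != len(subjective) or n < 2:
        raise ValueError("need two equal-length lists with at least 2 points")
    mean_p = sum(predicted) / n
    mean_s = sum(subjective) / n
    # Covariance and variances around the sample means.
    cov = sum((p - mean_p) * (s - mean_s)
              for p, s in zip(predicted, subjective))
    var_p = sum((p - mean_p) ** 2 for p in predicted)
    var_s = sum((s - mean_s) ** 2 for s in subjective)
    return cov / math.sqrt(var_p * var_s)
```

A PLCC closer to 1 means the method's scores track subjective opinion more linearly, which is why the rise from 0.442 to 0.6483 indicates an improvement.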
5. Conclusions
Experimental research shows that the ASODVS designed in this paper can quickly obtain, in a single pass, the three-dimensional coordinate data and color data of the surfaces of all measured objects in the panoramic range, and from this information it can build the three-dimensional point cloud model and mesh model of the scene. When displaying the reconstruction results, an observer-centered method is adopted so that the observer can experience the scene immersively. Experiments show that, in the parameter settings, the sharpness is related only to the y coordinate of the dot matrix. When the y coordinate increases from 11.13 to 17.57, the sharpness decreases from a maximum of 0.779 to 0.021; therefore, the y coordinate should be reduced as much as possible within this range to improve the visual experience and immersion. The characteristics and innovations of this article are that, when using ASODVS to reconstruct an indoor scene in 3D, one only needs to start the software system and launch the scanning thread to scan the scene; the operation is simple and the degree of automation is high. When collecting point cloud data, ASODVS always takes the single viewpoint Om as the coordinate origin. Therefore, there is no point cloud overlap, density difference, or data redundancy, and no more complicated calculation steps, such as point cloud registration, are required. The disadvantage of this paper is that the depth measurement accuracy of spatial object points is not high; an ultra-high-definition imaging chip can be used to collect images to address this. In addition, the direction of the laser light source emitted by the current moving-surface laser generator is parallel to the horizontal plane, so it is impossible to effectively obtain data (i.e., the z coordinate) for points that also lie in the horizontal plane (such as a desktop).
The projection direction of the laser light source can be changed so that the three-dimensional laser scan covers the entire vertical field of view. Furthermore, this article performs 3D reconstruction directly on the point cloud data obtained from ASODVS, and the density of the point cloud data in the scene is unrelated to the surface shape, which affects the storage and modeling efficiency of the point cloud data to a certain extent. Future research will, based on the surface shape of the reconstructed object, intelligently identify its planes and complex surfaces and select suitable texture mapping and an appropriate triangular mesh model, so as to achieve accuracy, realism, speed, completeness, simplicity, and reliability simultaneously.
Data Availability
No data were used to support this study.
Conflicts of Interest
The author declares that there are no conflicts of interest regarding the publication of this article.
Copyright
Copyright © 2021 Jie Liu. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.