The processed image may be cropped, magnified, or otherwise altered based on the position or location of the user or some part of the user, such as the user's head. In one embodiment, the user-facing detector detects the location of the user's head, and the image detected by the scene-facing detector is adjusted to generate the processed image. For example, the user may be wearing a helmet with communications components capable of transmitting messages to the device and components configured to detect or determine the user's position or location.
All such means of determining a user's position or location are contemplated, and examples of such means will be discussed in more detail herein. The location of a user or a part of a user, such as the user's head or eyes, may be determined using any effective method. Positioning a user in the context of a dynamic perspective video window may be a function of determining the location of the scene-facing detector in space relative to observed landmarks, the location of the display relative to the scene-facing detector (typically a fixed constant), the location of the user-facing detector relative to the display (typically also fixed), and finally the location of the user's eyes relative to the user-facing detector.
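The chain of relative locations described above can be sketched as a composition of homogeneous transforms. The following is a minimal 2D sketch, assuming hypothetical calibration values; a real system would use full 3D poses estimated from landmark and face tracking:

```python
import numpy as np

def make_transform(rotation_deg: float, tx: float, ty: float) -> np.ndarray:
    """Build a 2D homogeneous transform (rotation plus translation)."""
    r = np.radians(rotation_deg)
    c, s = np.cos(r), np.sin(r)
    return np.array([[c, -s, tx],
                     [s,  c, ty],
                     [0,  0,  1.0]])

# Hypothetical values: the display-in-scene-detector and user-detector-in-
# display transforms are fixed calibration constants, as noted above.
T_scene_in_world   = make_transform(0.0, 5.0, 2.0)    # from landmark tracking
T_display_in_scene = make_transform(0.0, 0.0, -0.1)   # fixed calibration
T_user_in_display  = make_transform(180.0, 0.0, 0.0)  # fixed calibration
T_eyes_in_user     = make_transform(0.0, 0.0, 0.4)    # from face tracking

# Compose the chain to place the user's eyes in world coordinates.
T_eyes_in_world = (T_scene_in_world @ T_display_in_scene
                   @ T_user_in_display @ T_eyes_in_user)
eye_position = T_eyes_in_world[:2, 2]
```

Each factor in the product corresponds to one link in the chain the passage enumerates, so replacing any single calibration constant updates the final eye estimate without touching the others.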
Alternatively, a user may have affixed upon the user light-emitting glasses, detectable tags, or other implements that allow the detection of the user or one or more parts of the user. For example, the user may have adhesive dots attached to the user's head near the eyes that are detectable by a specific form of detector, such as a detector configured to detect a specific form of radiation emitted by the adhesive dots. The detection of these dots may be used to determine the location of the user's eyes.
Other methods may be used instead of, or in conjunction with, these methods.
Any method or means capable of providing data that may be used to determine the location, proximity, or any other characteristic of a user or a user's location is contemplated as within the scope of the present disclosure. In one embodiment, an augmented reality system may be implemented in a helmet, headgear, or eyewear. The location of the user's eyes may be determined by assuming that the user's eyes are proximate to the display or displays that are set into the area of the helmet, headgear, or eyewear that would normally be proximate to the eyes when the helmet, headgear, or eyewear is affixed to or worn by a user.
For example, in an augmented reality system implemented in eyewear with displays set into or proximate to where eyeglass lenses would normally be situated, the system may assume that the user's eyes are just behind the displays. Similarly, in a helmet-implemented system, the system may assume that the user's eyes are proximate to an eye-covering portion of the helmet. Other configurations and implementations that determine eye locations or the locations of other parts of a user based on the location of a part of the system assumed to be proximate to the user or a part of the user are contemplated as within the scope of the present disclosure.
As mentioned, in some embodiments, all of the functions may reside in a user device such as a portable camera or a smartphone. In other embodiments, the image may be captured by a user device with a suitable capture device, and transmitted over a network to another system that may provide, for example, an image processing service for analysis and pattern recognition. The image may first be manipulated to reduce noise or to convert multiple shades of gray to a simple combination of black and white.
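As a rough sketch of the grayscale-to-black-and-white conversion mentioned above, a simple fixed threshold can be applied per pixel, combined with a pixel count as a cheap measure of foreground content; the threshold level and sample patch below are hypothetical:

```python
import numpy as np

def threshold_image(gray: np.ndarray, level: int = 128) -> np.ndarray:
    """Convert a grayscale image to black and white by thresholding."""
    return (gray >= level).astype(np.uint8)  # 1 = white, 0 = black

def pixel_count(binary: np.ndarray) -> int:
    """Count foreground pixels in the thresholded image."""
    return int(binary.sum())

# A tiny 4x4 grayscale patch standing in for a captured frame.
patch = np.array([[ 10, 200, 210,  20],
                  [ 15, 220, 230,  25],
                  [ 12,  14,  16,  18],
                  [200, 201, 202, 203]], dtype=np.uint8)

bw = threshold_image(patch)
foreground = pixel_count(bw)  # pixels at or above the threshold
```

A production system would likely use an adaptive threshold rather than a fixed level, but the reduction to a binary image is the same.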
A number of image processing techniques may be used such as pixel counting, thresholding, segmentation, inspecting an image for discrete groups of connected pixels as image landmarks, edge detection, and template matching. A system may use a combination of these techniques to perform an image recognition process. Various methods known to those skilled in the art may be used to implement forms of feature descriptors.
For example, occurrences of gradient orientation in localized portions of an image may be counted. Alternatively and optionally, edge detection algorithms may be used to identify points in an image at which the image brightness changes sharply or has discontinuities. In an embodiment, feature descriptors may be used such that image detection may be based on the appearance of the object at particular interest points, and may be invariant to image scale and rotation. The descriptors may also be resilient to changes in illumination, noise, and minor changes in viewpoint.
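One way to sketch the gradient-orientation counting described above is a normalized histogram of gradient angles over a patch, in the spirit of HOG-style descriptors; the bin count and sample patch are illustrative choices, and the normalization step is what lends some resilience to illumination changes:

```python
import numpy as np

def gradient_orientation_histogram(gray: np.ndarray, bins: int = 8) -> np.ndarray:
    """Count occurrences of gradient orientation in an image patch,
    weighted by gradient magnitude, as a simple feature descriptor."""
    gy, gx = np.gradient(gray.astype(float))
    angles = np.arctan2(gy, gx)        # orientation in [-pi, pi]
    magnitudes = np.hypot(gx, gy)
    hist, _ = np.histogram(angles, bins=bins, range=(-np.pi, np.pi),
                           weights=magnitudes)
    total = hist.sum()
    # Normalize so the descriptor is less sensitive to overall brightness.
    return hist / total if total > 0 else hist

# A patch that is a vertical ramp: every gradient points the same way.
patch = np.outer(np.arange(8), np.ones(8))
descriptor = gradient_orientation_histogram(patch)
```

For the ramp patch, all gradient mass falls into a single orientation bin, which is exactly the kind of distinctive, compact signature the passage describes.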
In addition, it may be desirable that feature descriptors are distinctive, easy to extract, allow for correct object identification with a low probability of mismatch, and are easy to match against a database of feature descriptors. In some embodiments, object recognition may be performed in real time or near real time. A combination of augmented reality and mobile computing technology may be used on mobile devices such as mobile phones.
Furthermore, because of the limited processing power and memory available on such devices, it may be advantageous for the device to transmit one or more captured images via an accessible data network to a system available via the network. For example, a server may provide image analysis and recognition services for image data transmitted by the mobile device. The server may also access a database storing augmented reality data that may be transmitted to the mobile device.
Furthermore, the server, in addition to maintaining a database storing augmented reality data for transmission, may also maintain a database storing detailed cartography information for recognized scenes. Map databases may store precise location information about observed physical landmarks in various regions.
Such information may be maintained and transmitted to mobile devices so that they might then track their location against the provided map.
Computationally, it is typically costly to construct such maps dynamically. Thus, in various embodiments, mobile devices may be enabled to capture information about detected physical areas. The locations may be maintained in a persistent map database, and the map may be made available to other mobile devices that later enter the area, such that those devices need not recalculate the locations of observed scenes. At a minimum, the devices may need only make evolutionary updates to the map. Shared map information may thus provide a plurality of services for augmented reality computing.
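A persistent, shared map with evolutionary updates might be sketched as follows; the landmark identifiers, coordinates, and blending weight are hypothetical:

```python
from dataclasses import dataclass, field

@dataclass
class SharedMap:
    """A persistent map of observed landmark locations, shared so that
    devices entering an area need not recompute the scene from scratch."""
    landmarks: dict = field(default_factory=dict)  # landmark id -> (lat, lon)

    def register(self, landmark_id: str, location: tuple) -> None:
        self.landmarks[landmark_id] = location

    def update(self, landmark_id: str, location: tuple,
               weight: float = 0.1) -> None:
        """Evolutionary update: nudge the stored location toward a new
        observation instead of rebuilding the map."""
        old = self.landmarks.get(landmark_id)
        if old is None:
            self.register(landmark_id, location)
        else:
            self.landmarks[landmark_id] = tuple(
                (1 - weight) * a + weight * b for a, b in zip(old, location))

shared = SharedMap()
shared.register("fountain", (47.60, -122.33))   # first device maps the area
shared.update("fountain", (47.70, -122.33))     # later observation refines it
```

The small blending weight reflects the passage's point that later devices make incremental refinements rather than recomputing the map.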
The mobile device may include a location determination function, such as GPS or cellular-based location determination. In an embodiment, the location determination performed by the device may be transmitted to a server. The device's location may be determined hierarchically, for example beginning with a coarse location estimate and refining the initial estimate to arrive at a more precise estimate. In one embodiment, the server may perform refined location determination based on an analysis of the transmitted image. By taking into account the transmitted location, the server may narrow the search for a refined location.
For example, if the transmitted location estimate indicates that the device is within a given radius of a downtown city area, the server may focus further search inquiries on information within the estimated area. The server may include or access a database of image information and feature descriptors, and may perform database queries driven by location, tracking, and orientation data as determined from an analysis of the transmitted image information. For example, an analysis of an image of a landmark may result in the extraction of feature descriptors that may uniquely distinguish the landmark.
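The coarse-to-fine narrowing described above can be sketched as a radius filter over a landmark database, using an equirectangular distance approximation that is adequate at city scale; the landmarks and coordinates are hypothetical:

```python
import math

def candidates_within_radius(estimate, radius_m, landmark_db):
    """Narrow a landmark search to database entries near a coarse
    location estimate. landmark_db maps landmark id -> (lat, lon)."""
    lat0, lon0 = estimate
    m_per_deg = 111_320.0  # meters per degree of latitude (approximate)
    hits = []
    for name, (lat, lon) in landmark_db.items():
        dx = (lon - lon0) * m_per_deg * math.cos(math.radians(lat0))
        dy = (lat - lat0) * m_per_deg
        if math.hypot(dx, dy) <= radius_m:
            hits.append(name)
    return hits

db = {"clock_tower": (47.6205, -122.3493),
      "stadium":     (47.5952, -122.3316)}
nearby = candidates_within_radius((47.6200, -122.3490), 500, db)
```

Only the nearby candidates then need descriptor matching, which is the point of refining the coarse estimate first.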
The server may perform a database query for similar feature descriptors. The returned query may indicate the identity of the landmark captured in the image.
Furthermore, the server may determine that the image was captured at a particular orientation with respect to the landmark. Once the device location and orientation are determined, a number of useful features and services may be provided to the device. In one embodiment, targeted advertisements that may be relevant to the location and local environment may be downloaded to the device, whereupon the advertisements may be merged with the currently presented image and displayed on the device.
The data may be associated with feature descriptors that are associated with particular locations and businesses. It can be further appreciated that once a device's location and orientation or point of view is determined, any number of services may be provided related to the location and orientation. For example, real time or near real time queries may be generated or prompted upon direct input from the user.
In an embodiment, when a user clicks on a portion of a rendered image on the mobile device, the augmented reality system may interpret the click as a request for additional information about the item or landmark represented by the selected portion of the rendered image. For example, the user may click on the portion of the image in which a particular business is rendered. Such navigable areas may be rendered similarly to a web page in a browser. Rendering of the information received from the database may be performed through a variety of methods, such as a 2D overlay, 3D augmented reality, playback of a particular sound, and the like.
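Interpreting a click as a request about the landmark rendered at that point can be sketched as a hit test against per-landmark bounding boxes in the rendered frame; the region names and coordinates are hypothetical:

```python
def hit_test(click_xy, regions):
    """Map a user tap on the rendered frame to the landmark whose
    bounding box contains it. regions maps landmark id -> (x0, y0, x1, y1)."""
    x, y = click_xy
    for landmark, (x0, y0, x1, y1) in regions.items():
        if x0 <= x <= x1 and y0 <= y <= y1:
            return landmark  # this landmark's metadata would be requested
    return None

# Screen-space regions produced by the renderer for the current frame.
regions = {"cafe": (0, 0, 100, 80), "bank": (120, 0, 220, 80)}
selected = hit_test((150, 40), regions)
```

The returned identifier would then drive the database query for the additional information the user requested.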
It can be appreciated that some applications of augmented reality computing may comprise the transmission of augmentation and cartography data that is associated not with a specific location but rather with the features of one or more observed objects. For example, a device may recognize a can of soda, which may not by itself be unique to any one specific location. In this example, the server may not associate the metadata with a location, and the device may not request position refinements from the server, because the device may have already determined its position and may instead be leveraging the augmented reality system for information on dynamic scene elements.
In some embodiments, the image data captured by the device may be transmitted to the server for analysis and response. In other embodiments, the device may extract feature descriptors from captured images and transmit the extracted descriptors to the server. In addition to providing metadata as described in the above examples, context specific actions may also be delivered to a device. In one embodiment, a device may receive a request to provide the database with a particular piece of information when a particular landmark or location is determined to be in view.
For example, during the context of a shared game, the player's current health may be requested when triggered by a particular landmark that comes into view. The player health information may then be transmitted to other players cooperating in a shared gaming experience. In some embodiments, the database may comprise predetermined data such as feature descriptors and metadata associated with one or more landmarks. The predetermined data may be provided by the service provider.
Additionally and optionally, the data may be user defined and transmitted by users. For example, landmarks that are not represented by pre-populated feature descriptors in the database may be represented by images provided by users. The term landmark may comprise any recognizable feature in an image, such as a textured portion of any object. When a pattern fails to be recognized by the image recognition engines, it may be determined that the pattern represents a new landmark, and the user-transmitted image may be used to represent the new landmark.
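The recognize-or-register behavior described above might be sketched as a nearest-descriptor lookup that falls back to creating a new entry; the descriptor vectors and matching threshold are hypothetical:

```python
def recognize_or_register(descriptor, db, new_id, threshold=0.2):
    """Return the id of the best-matching stored landmark, or register the
    user-supplied descriptor as a new landmark when nothing matches.
    Descriptors are plain feature vectors; distance is Euclidean."""
    best_id, best_dist = None, float("inf")
    for landmark_id, stored in db.items():
        dist = sum((a - b) ** 2 for a, b in zip(descriptor, stored)) ** 0.5
        if dist < best_dist:
            best_id, best_dist = landmark_id, dist
    if best_id is not None and best_dist <= threshold:
        return best_id
    db[new_id] = list(descriptor)  # pattern not recognized: new landmark
    return new_id

db = {"statue": [0.9, 0.1, 0.0]}
match = recognize_or_register([0.88, 0.12, 0.0], db, "user_landmark_1")
novel = recognize_or_register([0.0, 0.0, 1.0], db, "user_landmark_2")
```

The first call matches the pre-populated entry; the second fails to match anything and so registers a new user-defined landmark, as the passage describes.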
In an embodiment, a user may decide that they desire to augment some space with content of their own choosing. For example, a user may enter an unknown area, collect information about the area such as feature descriptors, map data, and the like, and register the information in a database such that other users entering the area may then recognize the area and their place within the area.
Additionally and optionally, the user or an application may choose to associate their own augmentation metadata with the area.
Multiple users may associate different metadata with a single area and allow the data to be accessible to different subsets of users. For example, a user may anchor some specific virtual content representing a small statue in a tavern, which may then be made visible to the user's on-line video game group when they enter the tavern while the virtual content may not be seen by any other mobile users in other video game groups. In another example, another user may have augmented the tavern with animated dancing animals.
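Group-scoped augmentation of a shared venue could be sketched as a store keyed by venue and group, so that each group sees only its own virtual content; the venue, group names, and content are hypothetical:

```python
from collections import defaultdict

class AugmentationStore:
    """Anchor virtual content to a venue per user group, so each group
    sees only its own augmentations of the shared landmark map."""
    def __init__(self):
        self._content = defaultdict(dict)  # venue -> {group: content}

    def anchor(self, venue, group, content):
        self._content[venue][group] = content

    def visible_to(self, venue, group):
        """Return the content this group sees at the venue, if any."""
        return self._content[venue].get(group)

store = AugmentationStore()
store.anchor("tavern", "guild_a", "small statue")
store.anchor("tavern", "guild_b", "dancing animals")
```

The shared landmark descriptors live outside this store; only the per-group metadata is partitioned, matching the passage's split between common map data and group-specific augmentations.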
By enabling such augmentation and data sharing, the members of any type of gaming, social, or other group may share in the same set of common information about the tavern, its landmark descriptors, and their locations. At the same time, all users may not necessarily share in the same metadata associated with the venue. In an embodiment, metadata such as device location may be automatically and seamlessly transmitted by the user device to supplement the newly added landmark.
Additionally and optionally, users may be prompted to provide additional information that is associated with the newly created entry. Furthermore, users may provide additional context-sensitive metadata associated with a particular landmark. For example, a landmark may contain different sets of metadata that depend upon the user's context: a building may be associated with different metadata when viewed within a particular game application than when viewed from a travel guide application.
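Context-dependent metadata might be sketched as a per-landmark mapping keyed by the requesting application's context; the landmark name and metadata values below are hypothetical:

```python
# Hypothetical landmark with different metadata per application context.
landmark_metadata = {
    "old_mill": {
        "game_app":     {"role": "quest checkpoint"},
        "travel_guide": {"note": "historic flour mill"},
    },
}

def metadata_for(landmark, context):
    """Return the metadata set matching the requesting app's context."""
    return landmark_metadata.get(landmark, {}).get(context)
```

A single recognized landmark thus yields different augmentations depending on which application asks, without duplicating the landmark's feature descriptors.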
In one exemplary embodiment illustrated in FIG., a device may capture an image of a scene. The captured image file may be transmitted via a network to a system that may comprise one or more servers hosting at least one application that receives the transmitted image and analyzes it to extract feature descriptors. The device may further include a location determination capability using GPS or other location determination means, and may transmit the location information along with the image data.
The system may further have access to a data store that may comprise a database of predetermined landmarks associated with a number of feature descriptors.
The system may query the data store for a matching landmark based on the feature descriptors extracted from the image transmitted by the device. If a match is found, the data store may further return metadata associated with the matched landmark.
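The query-and-return-metadata step might be sketched as a nearest-descriptor search over the data store that returns metadata only when the best match is within a threshold; the descriptors, threshold, and metadata are hypothetical:

```python
def query_landmark(descriptor, store, threshold=0.25):
    """Match an extracted feature descriptor against stored landmarks and
    return the associated metadata, or None when no landmark matches."""
    best, best_dist = None, float("inf")
    for entry in store:
        dist = sum((a - b) ** 2
                   for a, b in zip(descriptor, entry["descriptor"])) ** 0.5
        if dist < best_dist:
            best, best_dist = entry, dist
    if best is not None and best_dist <= threshold:
        return best["metadata"]  # match found: return landmark metadata
    return None                  # no match within the threshold

store = [{"descriptor": [0.2, 0.8], "metadata": {"name": "city_hall"}}]
result = query_landmark([0.21, 0.79], store)
```

The threshold controls the trade-off between missed matches and false identifications; a deployed system would tune it against the descriptor scheme in use.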