US20170020485A1 - Image processing apparatus and method - Google Patents

Image processing apparatus and method

Info

Publication number
US20170020485A1
US20170020485A1 (U.S. application Ser. No. 15/287,414)
Authority
US
United States
Prior art keywords
rendered image
depth
image
volume data
data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US15/287,414
Inventor
Sung-Yun Kim
Jun-kyo LEE
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Medison Co Ltd
Original Assignee
Samsung Medison Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Medison Co Ltd filed Critical Samsung Medison Co Ltd
Priority to US 15/287,414
Publication of US20170020485A1
Legal status: Abandoned

Classifications

    • A61B 8/0866 — Detecting organic movements or changes, e.g. tumours, cysts, swellings, involving foetal diagnosis; pre-natal or peri-natal diagnosis of the baby
    • A61B 8/461, A61B 8/466 — Displaying means of special interest adapted to display 3D data
    • A61B 8/463 — Displaying means characterised by displaying multiple images or images and diagnostic data on one display
    • A61B 8/467 — Diagnostic devices characterised by special input means for interfacing with the operator or the patient
    • A61B 8/483 — Diagnostic techniques involving the acquisition of a 3D volume of data
    • A61B 8/5207 — Processing of raw data to produce diagnostic data, e.g. for generating an image
    • A61B 8/5215, A61B 8/5223 — Processing of medical diagnostic data for extracting a diagnostic or physiological parameter
    • G06T 15/00, G06T 15/08 — 3D image rendering; volume rendering
    • G06T 19/20 — Editing of 3D images, e.g. changing shapes or colours, aligning objects or positioning parts
    • G06T 7/0002, G06T 7/0012 — Inspection of images; biomedical image inspection
    • G16H 50/30 — ICT specially adapted for medical diagnosis, simulation or data mining; calculating health indices or individual health risk assessment
    • G06T 2200/04 — Indexing scheme involving 3D image data
    • G06T 2207/10132, G06T 2207/10136 — Ultrasound image; 3D ultrasound image
    • G06T 2210/41 — Medical
    • G06T 2219/20, G06T 2219/2021 — Editing of 3D models; shape modification

Definitions

  • the present invention relates to an image processing apparatus and method.
  • An image processing apparatus may obtain 2D rendered images with three dimensional (3D) textures by rendering volume data for stereographic images.
  • the image processing apparatus may also process the volume data to enhance the image quality of rendered images or edit the rendered images.
  • an image processing apparatus and method are required to efficiently process the volume data.
  • the present invention provides an image processing apparatus and method for efficiently processing volume data.
  • an image processing apparatus comprising: a data obtaining unit for obtaining volume data that contains a target image; a depth-data obtaining unit for obtaining depth data that indicates a depth to the surface of the target image from an image plane; an image processing unit for processing the volume data based on the depth data into processed volume data, and obtaining a rendered image based on the processed volume data; and a display unit for displaying the rendered image.
  • the image processing apparatus may further comprise an input unit for receiving an edit request to edit the rendered image from a user, wherein the image processing unit obtains an edited rendered image based on the edit request and the depth-data, and the display unit displays the edited rendered image.
  • the image processing unit may divide the volume data into a target volume and a non-target volume on the border of the surface of the target image obtained based on the depth-data, and obtain processed volume data formed by removing the non-target volume from the volume data.
  • the image processing unit may obtain a mask volume that indicates the surface of the target image based on the depth-data, and mask the volume data with the mask volume to obtain the processed volume data.
  • the image processing unit may obtain an editing area within the rendered image based on the edit request, establish a deletion volume corresponding to the editing area in the processed volume data based on the depth-data, remove the deletion volume from the processed volume data to obtain an edited volume data, and obtain the edited rendered image based on the edited volume data.
  • a bottom depth of the deletion volume from the image plane may be equal to or deeper than the depth to the surface of the target image.
  • a top depth of the deletion volume from the image plane may be equal to or shallower than a minimum depth to the surface of the target image within the editing area, and the bottom depth of the deletion volume from the image plane may be equal to or shallower than a maximum depth to the surface of the target image within the editing area.
  • the edit request may further include user depth information that indicates the bottom depth of the deletion volume.
  • the input unit may receive a recover request to recover the edited rendered image, the image processing unit may recover the edited rendered image based on the recover request to obtain a recovered rendered image, and the display unit may display the recovered rendered image.
  • the image processing unit may set up the deletion volume or a part of the deletion volume from the edited volume data as a recover volume, obtain recovered volume data by recovering the recover volume in the edited volume data, and obtain the recovered rendered image based on the recovered volume data.
  • an image processing method comprising: obtaining volume data that contains a target image; obtaining depth data that indicates a depth to the surface of the target image from an image plane; processing the volume data based on the depth data into processed volume data, and obtaining a rendered image based on the processed volume data; and displaying the rendered image.
  • the image processing method may further comprise receiving an edit request to edit the rendered image from a user; obtaining an edited rendered image based on the edit request and the depth-data; and displaying the edited rendered image.
  • the obtaining of the processed volume data may further comprise dividing the volume data into a target volume and a non-target volume on the border of the surface of the target image obtained based on the depth-data, and removing the non-target volume from the volume data.
  • the obtaining of the edited rendered image may comprise obtaining an editing area within the rendered image based on the edit request, establishing a deletion volume corresponding to the editing area in the processed volume data based on the depth-data, removing the deletion volume from the processed volume data to obtain an edited volume data, and obtaining the edited rendered image based on the edited volume data.
  • a bottom depth of the deletion volume from the image plane may be equal to or deeper than the depth to the surface of the target image.
  • a top depth of the deletion volume from the image plane may be equal to or shallower than a minimum depth to the surface of the target image within the editing area, and the bottom depth of the deletion volume from the image plane may be equal to or shallower than a maximum depth to the surface of the target image within the editing area.
  • the edit request may further include user depth information that indicates the bottom depth of the deletion volume.
  • the image processing method may further comprise receiving a recover request to recover the edited rendered image, recovering the edited rendered image based on the recover request to obtain a recovered rendered image, and displaying the recovered rendered image.
  • the obtaining of the recovered rendered image may comprise setting up the deletion volume or a part of the deletion volume from the edited volume data as a recover volume, obtaining recovered volume data by recovering the recover volume in the edited volume data, and obtaining the recovered rendered image based on the recovered volume data.
  • a computer readable recording medium having embodied thereon programs that perform, when executed by a computer, the method as described above.
  • FIG. 1 is a block diagram of an image processing apparatus according to an embodiment of the present invention.
  • FIG. 2 depicts an example of volume data obtained by a data obtaining unit of FIG. 1.
  • FIG. 3 depicts an example of a method of obtaining depth data performed by a depth-data obtaining unit of FIG. 1.
  • FIG. 4 depicts an example of processed volume data obtained by an image processing unit of FIG. 1.
  • FIG. 5 depicts an example of a mask volume used by the image processing unit of FIG. 1 to obtain the processed volume data.
  • FIG. 6 shows an example of a rendered image where a target is a fetus.
  • FIG. 7 shows another example of an edited rendered image where the target is the fetus.
  • FIG. 8 shows examples of the rendered image in which an editing area is set up and the edited rendered image.
  • FIG. 9 depicts an example of the processed volume data in which a deletion volume is set up.
  • FIG. 10 shows an enlargement of a part of FIG. 9.
  • FIG. 11 depicts an example of an edited volume data obtained by removing the deletion volume from the processed volume data.
  • FIG. 12 depicts an example of the edited volume data for the edited rendered image.
  • FIG. 13 depicts an example of recovered volume data for a recovered rendered image.
  • FIG. 14 is a flowchart of an image processing method according to an embodiment of the present invention.
  • FIG. 15 shows an example of the edited rendered image obtained by performing an image processing method different from the embodiments of the present invention.
  • FIG. 1 is a block diagram of an image processing apparatus 100 according to an embodiment of the present invention.
  • the image processing apparatus 100 includes a control unit 110 and a display unit 120 .
  • the image processing apparatus 100 may further include an input unit 130 and a storage unit 140 .
  • the image processing apparatus 100 may be applied to a medical image device, such as an ultrasound imaging device, a computed tomography (CT) device, or a magnetic resonance imaging (MRI) device.
  • the image processing apparatus 100 may be incorporated in the medical imaging device.
  • the image processing apparatus 100 may be applied not only to medical imaging devices but also to various imaging devices that require volume data to be processed.
  • the control unit 110 may obtain and process data to create a display image to be displayed on the display unit 120 .
  • the display unit 120 may display the display image in real time according to control by the control unit 110 .
  • the input unit 130 may receive a user request from a user.
  • the control unit 110 may process the data based on the user request.
  • the input unit 130 or a part of the input unit 130 may be displayed on the display unit 120 .
  • the control unit 110 may include a data obtaining unit 112 , a depth-data obtaining unit 114 and an image processing unit 116 .
  • the data obtaining unit 112 obtains volume data that contains a target image.
  • the depth-data obtaining unit 114 obtains depth data that indicates a depth to the surface of the target image with respect to an image plane.
  • the image processing unit 116 obtains processed volume data by processing the volume data based on the depth data, and obtains a rendered image based on the processed volume data.
  • the display unit 120 displays the rendered image.
  • FIG. 2 depicts an example of the volume data obtained by the data obtaining unit 112 of FIG. 1 .
  • a volume data 300 obtained in the data obtaining unit 112 contains a target image 200 .
  • the volume data 300 may include a plurality of voxel values.
  • the target image 200 is stereoscopic data of a target.
  • in FIG. 2, the target image 200 is, by way of example only, stereoscopic data of a fetus, but is not limited thereto.
  • the target may be any animal body including a human body, or a part of an animal body.
  • the target may be a fetus or an organ of an animal body.
  • the data obtaining unit 112 may scan a three dimensional (3D) space having a target, and may obtain the volume data 300 that may image the scanned 3D space with a 3D effect.
  • the data obtaining unit 112 may receive scan information of the scanned target from an external scanning device, and may obtain the volume data 300 based on the scan information.
  • the data obtaining unit 112 may receive the volume data 300 from the external device.
  • the method of obtaining the volume data in the image processing apparatus 100 is not limited thereto, but the volume data may be obtained in different ways.
  • FIG. 3 depicts an example of a method of obtaining depth data performed by a depth-data obtaining unit 114 of FIG. 1 .
  • the depth-data obtaining unit 114 obtains the depth data that indicates surface depths DP1-DP5 between the surface of the target image 200 and an image plane IPL.
  • the image plane IPL may be a virtual plane on which a viewpoint image of the volume data 300 captured by a virtual camera is formed.
  • the viewpoint image formed on the image plane IPL may be displayed through the display unit 120 .
  • the depth-data obtaining unit 114 may set a position and an orientation of the image plane IPL with respect to the volume data 300 in different ways.
  • the position and the orientation of the image plane IPL may be altered based on the user request input through the input unit 130 .
  • the image plane IPL may include a plurality of pixels PX1-PX5, which are arranged in a matrix form.
  • FIG. 3 depicts, by way of an example, first to fifth pixels PX1-PX5 arranged in a line, but the number of the pixels to be included in the image plane IPL is not limited thereto.
  • the depth-data obtaining unit 114 may obtain a depth to the surface of the target image 200 for each of the plurality of pixels PX1-PX5. Accordingly, the depth data may include a plurality of depths DP1-DP5 for the plurality of pixels PX1-PX5 in the image plane IPL.
  • the depth-data obtaining unit 114 may obtain the depth data based on the amount of reflection of light incident to the volume data 300 from the image plane IPL. This is because the surface of the target image 200 in the volume data has more light reflection compared with parts other than the surface. Thus, the depth-data obtaining unit 114 may detect max points where the amount of reflected light is maximum in the volume data 300 , and consider the max points as surface points SP1-SP5 of the target image 200 . Each surface point SP1-SP5 may be a voxel. Furthermore, the depth-data obtaining unit 114 may obtain depth data based on the surface points SP1-SP5.
  • the depth-data obtaining unit 114 may obtain the depth-data of a target image 200 based on ray casting that is used for volume rendering.
  • a light ray RY1-RY5 may reach a virtual plane VPL passing through the volume data 300 from the image plane IPL.
  • a plurality of sample points (S1, S2, . . . , Sn, where n is an integer) are established for each of the plurality of rays RY1-RY5.
  • Each of the plurality of sample points S1, S2, . . . , Sn may be a voxel.
  • the sample points shown in FIG. 3 are merely illustrative, and the locations of the sample points, gaps between the sample points, the number of the sample points, etc. are not limited thereto.
  • the light reflection is obtained at each of the plurality of sample points S1, S2, . . . , Sn on one of the plurality of rays RY1-RY5, and thus a total sum of the reflections for the ray is obtained by summing all the reflections at the plurality of sample points on the ray.
  • a total sum TS(2) of the reflections at the sample points S1, S2, . . . , Sn on the second ray RY2 may be obtained as given by Equation 1:
  • $TS(2) = \sum_{i=1}^{n} D_i \prod_{j=1}^{i-1} T_j$   (Equation 1)
  • where Di is the intensity of light at the i-th sample point Si of the second ray RY2, and Tj is the transparency at the j-th sample point Sj of the second ray RY2.
  • the depth-data obtaining unit 114 may detect a max reflection among the plurality of sample points S1, S2, . . . , Sn of each of the plurality of rays RY1-RY5, and presume the sample point having the max reflection to be a surface point SP1-SP5 of the target image 200.
  • the depth-data obtaining unit 114 may detect the max reflection Tmax(2) among the sample points S1, S2, . . . , Sn of the second ray RY2 as in Equation 2:
  • $T_{\max}(2) = \max_i \left( D_i \prod_{j=1}^{i-1} T_j \right)$   (Equation 2)
  • the depth-data obtaining unit 114 may take the sample point at which the max reflection Tmax(2) occurs as the second surface point SP2.
  • the depth-data obtaining unit 114 may presume the second surface point SP2 of the second ray RY2 to be in the surface of the target image 200 with respect to the second pixel PX2, and obtain the depth between the second surface point SP2 and the second pixel PX2 as a second surface depth DP2. The depth-data obtaining unit 114 may also obtain depth data for the remaining rays by obtaining their depths to the surface.
  • the image processing unit 116 will now be described.
  • the image processing unit 116 obtains processed volume data by processing the volume data 300 based on the depth data.
  • the image processing unit 116 may divide the volume data 300 into a target volume and a non-target volume on the border of the surface of the target image 200 obtained based on the depth data.
  • the image processing unit 116 may obtain the processed volume data with the non-target volume removed from the volume data 300 .
  • FIG. 4 depicts an example of the processed volume data obtained by the image processing unit 116 of FIG. 1 . It is assumed that the processed volume data 300 A shown in FIG. 4 is obtained by the image processing unit 116 of FIG. 1 by processing the volume data 300 of FIG. 3 based on the depth data.
  • the processed volume data 300 A may result from eliminating the non-target volume from the volume data 300 with respect to the surface of the target image 200 (OMS).
  • the non-target volume may have a depth with respect to the image plane IPL less than the depth to the surface of the target image 200 (OMS).
  • Removal of the non-target volume may refer to altering each value of the plurality of voxels included in the non-target volume, from among the values of the plurality of voxels included in the volume data 300, to a reference value. Voxels whose values are altered to the reference value may be presented in a reference color, such as black.
  • the image processing unit 116 of FIG. 1 may remove only the non-target volume from the volume data 300 while maintaining the surface of the target image 200 (OMS) and a volume deeper than the OMS.
  • the non-target volume is likely to be noise or an obstacle whose depth is shallower than the depth to the surface of the target image 200 (OMS).
  • the image processing unit 116 of FIG. 1 may remove the noise or obstacle from the volume data 300 while preserving the surface of the target image 200 (OMS).
  • FIG. 5 depicts an example of a mask volume used by the image processing unit 116 of FIG. 1 to obtain the processed volume data.
  • the image processing unit 116 of FIG. 1 may obtain the mask volume that indicates the surface of the target image (OMS) based on the depth data.
  • a hatched part of the mask volume MV may indicate the target volume OBV while a non-hatched part indicates the non-target volume.
  • the image processing unit 116 of FIG. 1 may obtain the processed volume data by masking the volume data 300 with the mask volume MV.
  • Masking may be an image processing operation which involves matching the volume data 300 and the mask volume MV, and removing the non-target volume (non-hatched part) indicated by the mask volume MV from the volume data 300 to obtain the processed volume data 300 A.
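As an illustration of the masking step described above, the sketch below builds a mask from the per-pixel surface depths and resets every voxel shallower than the surface to a reference value. It is a minimal sketch, not the patent's implementation: it assumes the volume has already been resampled into view-aligned coordinates of shape (H, W, Z), and the array names, the `voxel_size` spacing, and the scalar `reference` value are illustrative assumptions.

```python
# Hedged sketch: build a mask volume from per-pixel surface depths and
# remove the non-target volume by resetting its voxels to a reference value.
import numpy as np

def mask_with_depth(volume, depth_map, voxel_size, reference=0.0):
    """volume: (H, W, Z) voxels ordered along the viewing direction from the
    image plane; depth_map: (H, W) surface depths DP obtained per pixel."""
    z_index = np.arange(volume.shape[-1]) * voxel_size              # depth of each slab
    target_mask = z_index[None, None, :] >= depth_map[..., None]    # at or below the surface
    processed = np.where(target_mask, volume, reference)            # keep only the target volume
    return processed, target_mask                                   # analogue of the mask volume MV
```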
  • the image processing unit 116 obtains a rendered image based on the processed volume data 300 A.
  • the rendered image may be a viewpoint image of the processed volume data 300 A, which is formed on the image plane IPL.
  • the image processing unit 116 may obtain the rendered image based on the ray casting. However, the image processing unit 116 may obtain the rendered image based not only on the ray casting but also on various volume rendering methods.
  • the display unit 120 of FIG. 1 displays the rendered image.
  • FIG. 6 shows an example of the rendered image where the target is a fetus.
  • the face of the target may be a region of interest.
  • the hand of the target may be a region of non-interest.
  • a method of editing the rendered image 400 by eliminating the region of non-interest from the rendered image 400 is required.
  • the input unit 130 may receive an edit request to edit the rendered image 400 from a user.
  • the image processing unit 116 may obtain an edited rendered image based on the edit request received through the input unit 130 and the depth data obtained by the depth-data obtaining unit 114 .
  • the display unit 120 may display the edited rendered image.
  • FIG. 7 shows another example of the edited rendered image where the target is the fetus. It is assumed that the edited rendered image 500 of FIG. 7 is obtained from the rendered image 400 based on the edit request and the depth data.
  • in the edited rendered image 500, the hand, which is the region of non-interest displayed in the rendered image 400, is removed.
  • as a result, the face of the target, which was hidden by the hand, is visible in the edited rendered image 500.
  • the image processing unit 116 may obtain an editing area based on the edit request, establish a deletion volume, which corresponds to the editing area, in the processed volume data based on the depth data, obtain an edited volume data having the deletion volume eliminated from the processed volume data, and obtain the edited rendered image based on the edited volume data.
  • FIG. 8 shows examples of the rendered image having an editing area set up, and the edited rendered image
  • the image processing unit 116 may set up the editing area EA in the rendered image 400 A based on the edit request received through the input unit 130 .
  • the display unit 120 may display the rendered image 400 A in which the editing area EA is set up.
  • the display unit 120 may also display the edited rendered image 500 A formed by editing the rendered image 400 A in the editing area EA.
  • the editing area EA is illustrated as a circle, but is not limited thereto.
  • the edit request may include editing area information that indicates the editing area EA.
  • the user may input the editing area information by directly selecting the editing area EA on the displayed rendered image 400 A.
  • the user may select the editing area EA on the rendered image 400 A through the input unit 130 that can be implemented with a mouse, a track ball, etc.
  • the user may input the editing area information by selecting a point on the rendered image 400 A displayed on the display unit 120 .
  • the image processing unit 116 may set up the editing area EA based on the point. For example, a circle centered on the point may be set up as the editing area EA.
  • the radius r of the editing area EA may be predetermined by default or adjusted by the user. Alternatively, an oval or a square with respect to the point may be set up as the editing area EA.
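A circular editing area of the kind described above can be represented as a boolean pixel mask over the image plane. The sketch below is one plausible way to build it; `point`, `r`, and the function name are illustrative assumptions, not taken from the patent.

```python
# Hedged sketch: circular editing area EA around a user-selected point.
import numpy as np

def editing_area(image_shape, point, r):
    """point: clicked pixel (row, col); r: radius in pixels."""
    rows, cols = np.indices(image_shape)
    dist2 = (rows - point[0]) ** 2 + (cols - point[1]) ** 2
    return dist2 <= r ** 2      # boolean mask of pixels inside the circle
```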
  • the image processing unit 116 may set up a deletion volume, which corresponds to the editing area EA established on the rendered image 400 A, within the processed volume data.
  • FIG. 9 depicts an example of the processed volume data in which the deletion volume is set up
  • FIG. 10 shows an enlargement of a part of FIG. 9
  • FIG. 11 depicts an example of edited volume data obtained by removing the deletion volume from the processed volume data.
  • the image processing unit 116 may set up the deletion volume DV, which corresponds to the editing area EA, within the processed volume data 300 B based on the editing area EA in the image plane IPL, and then obtain the edited volume data 300 C by removing the deletion volume DV from the processed volume data 300 B.
  • the editing area EA in the image plane IPL may include at least one pixel (EP1-EP5). Although the first to fifth pixels (EP1-EP5) are shown in a row in FIG. 10 , they are merely illustrative and the number of the at least one pixel included in the editing area EA is not limited.
  • the pixels (EP1-EP5) of FIG. 10 may be ones, which are included in the editing area EA, from among the pixels (PX1-PX5).
  • the depth to the bottom surface BS of the deletion volume DV from the first pixel EP1 is equal to the depth to the surface S(1) of the target image 200 . That is, the first deletion depth d(1) is 0. Furthermore, the depth to the bottom surface BS of the deletion volume DV from the third pixel EP3 is deeper than the depth to the surface S(3) of the target image 200 by the third deletion depth d(3).
  • the deletion depth d(p) may be set up in different ways.
  • the deletion depth d(p) may be set up constantly or differently for the plurality of pixels EP1-EP5 included in the editing area EA.
  • the deletion depth d(p) may have an automatically established value.
  • the depth to the upper surface (US) of the deletion volume from the image plane IPL may be equal to or shallower than the minimum depth to the surface Smin of the target image 200 within the editing area EA.
  • the depth to the bottom surface BS of the deletion volume DV from the image plane IPL may be equal to or shallower than the maximum depth to the surface Smax of the target image 200 within the editing area EA.
  • deletion depth d(p) for the corresponding pth pixel EPp may be obtained by the following equation:
  • p represents the position of the pth pixel PXp in the editing area EA
  • c represents a center point of the editing area EA
  • r represents the radius of the editing area EA.
  • the deletion depth d(p) may have a value that is adjusted by the user.
  • the edit request may further include user depth information that indicates the depth to the bottom surface BS of the deletion volume DV.
  • the edit request may further include user depth information that indicates the deletion depth d(p).
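The following sketch shows how a deletion volume could be carved out of the processed volume data from the editing area, the per-pixel surface depths, and a per-pixel deletion depth d(p). It is a hedged illustration under the same view-aligned array layout assumed earlier; the patent's own d(p) formula is not reproduced above, so `d_of_p` is simply taken as an input, and all names are assumptions.

```python
# Hedged sketch: remove a deletion volume DV from the processed volume data.
# Inside the editing area, voxels from the image plane down to surface depth
# plus d(p) are reset; voxels above the surface are already at the reference
# value in the processed data, so this is equivalent to removing only DV.
import numpy as np

def apply_deletion(processed, depth_map, edit_mask, d_of_p, voxel_size, reference=0.0):
    """processed: (H, W, Z) view-aligned voxels; depth_map: (H, W) surface depths;
    edit_mask: (H, W) boolean editing area EA; d_of_p: (H, W) deletion depths."""
    z_index = np.arange(processed.shape[-1])[None, None, :] * voxel_size
    bottom = (depth_map + d_of_p)[..., None]            # bottom surface BS per pixel
    deletion = edit_mask[..., None] & (z_index <= bottom)
    edited = np.where(deletion, reference, processed)
    return edited, deletion                              # edited volume data and DV mask
```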
  • the image processing unit 116 may obtain the edited volume data 300 C, and obtain the edited rendered image (e.g., the rendered image 500 A of FIG. 8 ) based on the edited volume data 300 C.
  • the user may re-input an edit request to re-edit the edited rendered image.
  • the method of re-editing the edited rendered image may employ the method of editing the rendered image (e.g., 400 A).
  • any overlapping description will be omitted.
  • the aim of editing is to remove the region of non-interest from the rendered image, but in the editing process not only the region of non-interest but also the region of interest may be removed. In addition, in the editing process, the region of non-interest may turn out to be a region of interest. Accordingly, a method of recovering a part or all of the removed region of non-interest in the edited rendered image is required.
  • the input unit 130 may receive a recover request for recovering the edited rendered image (e.g. the rendered image 500 A of FIG. 8 ).
  • the image processing unit 116 may obtain a recovered rendered image based on the recover request.
  • the display unit 120 may display the recovered rendered image.
  • FIG. 12 depicts an example of the edited volume data for the edited rendered image
  • FIG. 13 depicts an example of recovered volume data for a recovered rendered image.
  • the image processing unit 116 may set up a recover volume RV in the edited volume data 300 D.
  • the recover volume RV may be the deletion volume DV or a part of the deletion volume DV.
  • the bottom surface of the recover volume RV may coincide with the bottom surface of the deletion volume DV.
  • the image processing unit 116 may obtain recovered volume data 300 E by recovering the recover volume RV in the edited volume data 300 D.
  • the storage unit 140 may store volume data, processed volume data, edited volume data, recovered volume data, etc., which are processed by the control unit 110 .
  • the image processing unit 116 may recover the recover volume RV based on the processed volume data stored in the storage unit 140 .
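Recovery can then be sketched as copying voxel values back from the processed volume data kept in the storage unit, for the whole deletion volume or only a part of it. The function and mask names below are illustrative assumptions, not the patent's API.

```python
# Hedged sketch: recover a recover volume RV inside the edited volume data
# from the processed volume data stored before editing.
import numpy as np

def recover(edited, processed_stored, deletion_mask, recover_mask=None):
    """recover_mask selects the part of the deletion volume to restore;
    by default the whole deletion volume is recovered."""
    rv = deletion_mask if recover_mask is None else (deletion_mask & recover_mask)
    return np.where(rv, processed_stored, edited)        # recovered volume data
```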
  • FIG. 14 is a flowchart of the image processing method according to an embodiment of the present invention.
  • volume data containing a target image is obtained, in operation S110.
  • depth data indicating a depth to the surface of the target image from an image plane IPL is obtained, in operation S120.
  • processed volume data is obtained by processing the volume data based on the depth data, and a rendered image is obtained based on the processed volume data, in operation S130.
  • the rendered image is displayed, in operation S140.
  • the image processing method shown in FIG. 14 may be performed by the image processing apparatus shown in FIG. 1 .
  • Each step of the image processing method employs the steps described in connection with FIGS. 1 to 13 .
  • any overlapping description will be omitted.
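Read as a pipeline, operations S110-S140 can be outlined as below. The helpers are passed in as callables because the patent does not prescribe any particular implementation; this is only a structural sketch, and every name is an assumption.

```python
# Hypothetical outline of operations S110-S140; the volume argument stands in
# for the data obtained in S110, and all callables are assumptions.
def image_processing_method(volume, get_depth, process, render, display):
    depth_data = get_depth(volume)            # S120: depth to the target surface
    processed = process(volume, depth_data)   # S130: processed volume data
    rendered = render(processed)              # S130: rendered image
    display(rendered)                         # S140: display the rendered image
    return rendered
```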
  • FIG. 15 shows an example of an edited rendered image obtained by performing an image processing method different from the embodiments of the present invention.
  • a specific area may be removed from the volume data using a magic-cut functionality, and the rendered image may be obtained based on the volume data having the specific area removed. Meanwhile, in FIG. 15 , the edited rendered image shows discontinuity between the removed area and its surroundings, thus appearing unnatural and artificial. In addition, even an area whose removal was not desired may be removed from the volume data.
  • the edited rendered image 500 A shows continuity between the editing area EA and its surroundings, thus appearing natural and non-artificial.
  • the image processing apparatus and method may be provided to efficiently process the volume data.
  • the non-target volume may selectively be removed from the volume data.
  • the non-target volume is likely to be noise or an obstacle whose depth is shallower than the depth to the surface of the target image.
  • the quality of the rendered image may be enhanced because the noise or obstacle could be removed from the volume data while preserving the surface of the target image.
  • the edited rendered image may then be obtained, in which a region of interest is revealed by removing the region of non-interest that hides the region of interest.
  • the foregoing method may be written as computer programs and may be implemented in general-use digital computers that execute the programs using a computer readable recording medium.
  • the data structure used in the method can be recorded on the computer readable recording medium in various ways.
  • Examples of the computer readable recording medium include magnetic storage media (e.g., read only memory (ROM), random access memory (RAM), universal serial bus (USB), floppy disk, hard disk, etc.), and optical recording media (e.g., CD-ROM, or DVD).

Abstract

An image processing apparatus and method. The image processing apparatus includes a data obtaining unit for obtaining volume data that contains a target image; a depth-data obtaining unit for obtaining depth data that indicates a depth to the surface of the target image from an image plane; an image processing unit for processing the volume data into processed volume data based on the depth data, and obtaining a rendered image based on the processed volume data; and a display unit for displaying the rendered image.

Description

    CROSS-REFERENCE TO RELATED PATENT APPLICATION
  • This application is a U.S. Continuation application of U.S. patent application Ser. No. 15/001,133, filed on Jan. 19, 2016, which is a Continuation application of U.S. patent application Ser. No. 13/789,628, filed on Mar. 7, 2013 and claims the benefit of Korean Patent Application No. 10-2012-0023619, filed on Mar. 7, 2012, in the Korean Intellectual Property Office, the disclosures of which are incorporated herein in their entireties by reference.
  • BACKGROUND OF THE INVENTION
  • 1. Field of the Invention
  • The present invention relates to an image processing apparatus and method.
  • 2. Description of the Related Art
  • An image processing apparatus may obtain 2D rendered images with three dimensional (3D) textures by rendering volume data for stereographic images. The image processing apparatus may also process the volume data to enhance the image quality of rendered images or edit the rendered images.
  • Accordingly, an image processing apparatus and method are required to efficiently process the volume data.
  • SUMMARY OF THE INVENTION
  • The present invention provides an image processing apparatus and method for efficiently processing volume data.
  • According to an aspect of the present invention, there is provided an image processing apparatus comprising: a data obtaining unit for obtaining volume data that contains a target image; a depth-data obtaining unit for obtaining depth data that indicates a depth to the surface of the target image from an image plane; an image processing unit for processing the volume data based on the depth data into processed volume data, and obtaining a rendered image based on the processed volume data; and a display unit for displaying the rendered image.
  • The image processing apparatus may further comprise an input unit for receiving an edit request to edit the rendered image from a user, wherein the image processing unit obtains an edited rendered image based on the edit request and the depth-data, and the display unit displays the edited rendered image.
  • The image processing unit may divide the volume data into a target volume and a non-target volume on the border of the surface of the target image obtained based on the depth-data, and obtain processed volume data formed by removing the non-target volume from the volume data.
  • The image processing unit may obtain a mask volume that indicates the surface of the target image based on the depth-data, and mask the volume data with the mask volume to obtain the processed volume data.
  • The image processing unit may obtain an editing area within the rendered image based on the edit request, establish a deletion volume corresponding to the editing area in the processed volume data based on the depth-data, remove the deletion volume from the processed volume data to obtain an edited volume data, and obtain the edited rendered image based on the edited volume data.
  • A bottom depth of the deletion volume from the image plane may be equal to or deeper than the depth to the surface of the target image.
  • A top depth of the deletion volume from the image plane may be equal to or shallower than a minimum depth to the surface of the target image within the editing area, and the bottom depth of the deletion volume from the image plane may be equal to or shallower than a maximum depth to the surface of the target image within the editing area.
  • The edit request may further include user depth information that indicates the bottom depth of the deletion volume.
  • The input unit may receive a recover request to recover the edited rendered image, the image processing unit may recover the edited rendered image based on the recover request to obtain a recovered rendered image, and the display unit may display the recovered rendered image.
  • The image processing unit may set up the deletion volume or a part of the deletion volume from the edited volume data as a recover volume, obtain recovered volume data by recovering the recover volume in the edited volume data, and obtain the recovered rendered image based on the recovered volume data.
  • According to another aspect of the present invention, there is provided an image processing method comprising: obtaining volume data that contains a target image; obtaining depth data that indicates a depth to the surface of the target image from an image plane; processing the volume data based on the depth data into processed volume data, and obtaining a rendered image based on the processed volume data; and displaying the rendered image.
  • The image processing method may further comprise receiving an edit request to edit the rendered image from a user; obtaining an edited rendered image based on the edit request and the depth-data; and displaying the edited rendered image.
  • The obtaining of the processed volume data may further comprise dividing the volume data into a target volume and a non-target volume on the border of the surface of the target image obtained based on the depth-data, and removing the non-target volume from the volume data.
  • The obtaining of the edited rendered image may comprise obtaining an editing area within the rendered image based on the edit request, establishing a deletion volume corresponding to the editing area in the processed volume data based on the depth-data, removing the deletion volume from the processed volume data to obtain an edited volume data, and obtaining the edited rendered image based on the edited volume data.
  • A bottom depth of the deletion volume from the image plane may be equal to or deeper than the depth to the surface of the target image.
  • A top depth of the deletion volume from the image plane may be equal to or shallower than a minimum depth to the surface of the target image within the editing area, and the bottom depth of the deletion volume from the image plane may be equal to or shallower than a maximum depth to the surface of the target image within the editing area.
  • The edit request may further include user depth information that indicates the bottom depth of the deletion volume.
  • The image processing method may further comprise receiving a recover request to recover the edited rendered image, recovering the edited rendered image based on the recover request to obtain a recovered rendered image, and displaying the recovered rendered image.
  • The obtaining of the recovered rendered image may comprise setting up the deletion volume or a part of the deletion volume from the edited volume data as a recover volume, obtaining recovered volume data by recovering the recover volume in the edited volume data, and obtaining the recovered rendered image based on the recovered volume data.
  • According to another aspect of the present invention, there is provided a computer readable recording medium having embodied thereon programs that perform, when executed by a computer, the method as described above.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • The above and other features and advantages of the present invention will become more apparent by describing in detail exemplary embodiments thereof with reference to the attached drawings in which:
  • FIG. 1 is a block diagram of an image processing apparatus according to an embodiment of the present invention;
  • FIG. 2 depicts an example of volume data obtained by a data obtaining unit of FIG. 1;
  • FIG. 3 depicts an example of a method of obtaining depth data performed by a depth-data obtaining unit of FIG. 1;
  • FIG. 4 depicts an example of processed volume data obtained by an image processing unit of FIG. 1;
  • FIG. 5 depicts an example of a mask volume used by the image processing unit of FIG. 1 to obtain the processed volume data;
  • FIG. 6 shows an example of a rendered image where a target is a fetus;
  • FIG. 7 shows another example of an edited rendered image where the target is the fetus;
  • FIG. 8 shows examples of the rendered image in which an editing area is set up and the edited rendered image;
  • FIG. 9 depicts an example of the processed volume data in which a deletion volume is set up;
  • FIG. 10 shows an enlargement of a part of FIG. 9;
  • FIG. 11 depicts an example of an edited volume data obtained by removing the deletion volume from the processed volume data;
  • FIG. 12 depicts an example of the edited volume data for the edited rendered image;
  • FIG. 13 depicts an example of recovered volume data for a recovered rendered image;
  • FIG. 14 is a flowchart of an image processing method according to an embodiment of the present invention; and
  • FIG. 15 shows an example of the edited rendered image obtained by performing an image processing method different from the embodiments of the present invention.
  • DETAILED DESCRIPTION OF THE INVENTION
  • Reference will now be made in detail to embodiments, examples of which are illustrated in the accompanying drawings, wherein like reference numerals refer to like elements throughout. In this regard, the present embodiments may have different forms and should not be construed as being limited to the descriptions set forth herein. Accordingly, the embodiments are merely described below, by referring to the figures, to explain aspects of the present description.
  • FIG. 1 is a block diagram of an image processing apparatus 100 according to an embodiment of the present invention.
  • Referring to FIG. 1, the image processing apparatus 100 includes a control unit 110 and a display unit 120. The image processing apparatus 100 may further include an input unit 130 and a storage unit 140.
  • The image processing apparatus 100 may be applied to a medical image device, such as an ultrasound imaging device, a computed tomography (CT) device, or a magnetic resonance imaging (MRI) device. For example, the image processing apparatus 100 may be incorporated in the medical imaging device. The image processing apparatus 100 may be applied not only to medical imaging devices but also to various imaging devices that require volume data to be processed.
  • The control unit 110 may obtain and process data to create a display image to be displayed on the display unit 120. The display unit 120 may display the display image in real time according to control by the control unit 110. The input unit 130 may receive a user request from a user. The control unit 110 may process the data based on the user request. The input unit 130 or a part of the input unit 130 may be displayed on the display unit 120.
  • The control unit 110 may include a data obtaining unit 112, a depth-data obtaining unit 114 and an image processing unit 116. The data obtaining unit 112 obtains volume data that contains a target image. The depth-data obtaining unit 114 obtains depth data that indicates a depth to the surface of the target image with respect to an image plane. The image processing unit 116 obtains processed volume data by processing the volume data based on the depth data, and obtains a rendered image based on the processed volume data. The display unit 120 displays the rendered image.
  • FIG. 2 depicts an example of the volume data obtained by the data obtaining unit 112 of FIG. 1.
  • Referring to FIGS. 1 and 2, the volume data 300 obtained by the data obtaining unit 112 contains a target image 200. The volume data 300 may include a plurality of voxel values. The target image 200 is stereoscopic data of a target. In FIG. 2, the target image 200 is, by way of example only, stereoscopic data of a fetus, but is not limited thereto. The target may be any animal body, including a human body, or a part of an animal body. For example, the target may be a fetus or an organ of an animal body.
  • As an example, the data obtaining unit 112 may scan a three dimensional (3D) space having a target, and may obtain the volume data 300 that may image the scanned 3D space with a 3D effect. As another example, the data obtaining unit 112 may receive scan information of the scanned target from an external scanning device, and may obtain the volume data 300 based on the scan information. As a further example, the data obtaining unit 112 may receive the volume data 300 from the external device. However, the method of obtaining the volume data in the image processing apparatus 100 is not limited thereto, but the volume data may be obtained in different ways.
  • FIG. 3 depicts an example of a method of obtaining depth data performed by a depth-data obtaining unit 114 of FIG. 1.
  • Referring to FIGS. 1 and 3, the depth-data obtaining unit 114 obtains the depth data that indicates surface depths DP1-DP5 between the surface of the target image 200 and an image plane IPL.
  • The image plane IPL may be a virtual plane on which a viewpoint image of the volume data 300 captured by a virtual camera is formed. The viewpoint image formed on the image plane IPL may be displayed through the display unit 120.
  • The depth-data obtaining unit 114 may set a position and an orientation of the image plane IPL with respect to the volume data 300 in different ways. The position and the orientation of the image plane IPL may be altered based on the user request input through the input unit 130.
  • The image plane IPL may include a plurality of pixels PX1-PX5, which are arranged in a matrix form. FIG. 3 depicts, by way of an example, first to fifth pixels PX1-PX5 arranged in a line, but the number of the pixels to be included in the image plane IPL is not limited thereto.
  • The depth-data obtaining unit 114 may obtain a depth to the surface of the target image 200 for each of the plurality of pixels PX1-PX5. Accordingly, the depth data may include a plurality of depths DP1-DP5 for the plurality of pixels PX1-PX5 in the image plane IPL.
  • The depth-data obtaining unit 114 may obtain the depth data based on the amount of reflection of light incident to the volume data 300 from the image plane IPL. This is because the surface of the target image 200 in the volume data has more light reflection compared with parts other than the surface. Thus, the depth-data obtaining unit 114 may detect max points where the amount of reflected light is maximum in the volume data 300, and consider the max points as surface points SP1-SP5 of the target image 200. Each surface point SP1-SP5 may be a voxel. Furthermore, the depth-data obtaining unit 114 may obtain depth data based on the surface points SP1-SP5.
  • For example, the depth-data obtaining unit 114 may obtain the depth-data of a target image 200 based on ray casting that is used for volume rendering.
  • At each of the plurality of pixels PX1-PX5 in the image plane IPL, a light ray RY1-RY5 may reach a virtual plane VPL passing through the volume data 300 from the image plane IPL.
  • A plurality of sample points (S1, S2, . . . , Sn, where n is an integer) are established for each of the plurality of rays RY1-RY5. Each of the plurality of sample points S1, S2, . . . , Sn may be a voxel. The sample points shown in FIG. 3 are merely illustrative, and the locations of the sample points, gaps between the sample points, the number of the sample points, etc. are not limited thereto.
  • With ray casting, the light reflection is obtained at each of the plurality of sample points S1, S2, . . . , Sn on one of the plurality of rays RY1-RY5, and thus a total sum of the reflections for the ray is obtained by summing all the reflections at the plurality of sample points on the ray. For example, a total sum TS(2) of the reflections at sample points S1, S2, . . . , Sn on a second ray RY2 may be obtained as given by equation (1):
  • $TS(2) = \sum_{i=1}^{n} D_i \prod_{j=1}^{i-1} T_j$   (Equation 1)
  • where Di is an intensity of light at an ith sample point Si of the second ray RY2, and Tj is transparency at a jth sample point Sj of the second ray RY2.
  • The depth-data obtaining unit 114 may detect a max reflection among the plurality of sample points S1, S2, . . . , Sn of each of the plurality of rays RY1-RY5, and presume the sample point having the max reflection to be a surface point SP1-SP5 of the target image 200. For example, the depth-data obtaining unit 114 may detect the max reflection Tmax(2) among the sample points S1, S2, . . . , Sn of the second ray RY2 as in Equation 2:
  • $T_{\max}(2) = \max_i \left( D_i \prod_{j=1}^{i-1} T_j \right)$   (Equation 2)
  • The depth-data obtaining unit 114 may take the sample point at which the max reflection Tmax(2) occurs among the sample points S1, S2, . . . , Sn of the second ray RY2 as the second surface point SP2.
  • The depth-data obtaining unit 114 may presume the second surface point SP2 of the second ray RY2 to be in the surface of the target image 200 with respect to the second pixel PX2, and obtain the depth between the second surface point SP2 and the second pixel PX2 as a second surface depth DP2. The depth-data obtaining unit 114 may also obtain depth data for the remaining rays by obtaining their depths to the surface.
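A compact way to read Equations 1 and 2 is as an arg-max over the per-sample reflection terms along each ray. The sketch below assumes the intensity D and transparency T have already been sampled at n points per pixel into (H, W, n) arrays with uniform spacing `step`; the names and layout are illustrative assumptions, not the patent's implementation.

```python
# Hedged sketch: per-pixel surface depth from pre-sampled rays (Equations 1 and 2).
import numpy as np

def surface_depths(samples_D, samples_T, step):
    """samples_D, samples_T: (H, W, n) arrays ordered from the image plane
    toward the virtual plane VPL with sample spacing `step`."""
    # Accumulated transparency prod_{j=1}^{i-1} T_j, with an implicit 1 at i = 1.
    trans = np.cumprod(samples_T, axis=-1)
    trans = np.concatenate([np.ones_like(trans[..., :1]), trans[..., :-1]], axis=-1)

    reflection = samples_D * trans            # D_i * prod_{j<i} T_j per sample
    total = reflection.sum(axis=-1)           # Equation 1: total reflection TS per pixel
    surf_idx = reflection.argmax(axis=-1)     # Equation 2: sample with max reflection
    return surf_idx * step, total             # per-pixel surface depth DP, and TS
```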
  • The image processing unit 116 will now be described.
  • The image processing unit 116 obtains processed volume data by processing the volume data 300 based on the depth data. The image processing unit 116 may divide the volume data 300 into a target volume and a non-target volume on the border of the surface of the target image 200 obtained based on the depth data. The image processing unit 116 may obtain the processed volume data with the non-target volume removed from the volume data 300.
  • FIG. 4 depicts an example of the processed volume data obtained by the image processing unit 116 of FIG. 1. It is assumed that the processed volume data 300A shown in FIG. 4 is obtained by the image processing unit 116 of FIG. 1 by processing the volume data 300 of FIG. 3 based on the depth data.
  • Referring to FIGS. 3 and 4, the processed volume data 300A may result from eliminating the non-target volume from the volume data 300 with respect to the surface of the target image 200 (OMS). The non-target volume may have a depth with respect to the image plane IPL less than the depth to the surface of the target image 200 (OMS). Removal of the non-target volume may refer to altering each value of the plurality of voxels included in the non-target volume, from among the values of the plurality of voxels included in the volume data 300, to a reference value. Voxels whose values are altered to the reference value may be presented in a reference color, such as black.
  • As such, the image processing unit 116 of FIG. 1 may remove only the non-target volume from the volume data 300 while maintaining the surface of the target image 200 (OMS) and a volume deeper than the OMS. The non-target volume is likely to be noise or an obstacle whose depth is shallower than the depth to the surface of the target image 200 (OMS). Thus, the image processing unit 116 of FIG. 1 may remove the noise or obstacle from the volume data 300 while preserving the surface of the target image 200 (OMS).
  • FIG. 5 depicts an example of a mask volume used by the image processing unit 116 of FIG. 1 to obtain the processed volume data.
  • Referring to FIG. 5, the image processing unit 116 of FIG. 1 may obtain the mask volume that indicates the surface of the target image (OMS) based on the depth data. A hatched part of the mask volume MV may indicate the target volume OBV while a non-hatched part indicates the non-target volume.
  • Turning back to FIGS. 3 to 5, the image processing unit 116 of FIG. 1 may obtain the processed volume data by masking the volume data 300 with the mask volume MV. Masking may be an image processing operation which involves matching the volume data 300 and the mask volume MV, and removing the non-target volume (non-hatched part) indicated by the mask volume MV from the volume data 300 to obtain the processed volume data 300A.
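  • As a rough sketch of the masking described above, and assuming for simplicity that the rays are parallel to the first axis of a regular voxel grid and that the depth data is expressed in voxel units, the mask volume and the processed volume data might be obtained as follows; the names mask_non_target, volume and depth_map are illustrative only.

```python
import numpy as np

def mask_non_target(volume, depth_map, reference_value=0.0):
    """Build a mask volume from the per-pixel surface depths and remove the
    non-target volume lying in front of the surface.

    volume    -- 3-D array indexed as [depth, row, col]
    depth_map -- surface depth (in voxel units) for each pixel of the image plane
    Voxels in front of the surface are set to the reference value, which is
    later shown in the reference color (e.g. black).
    """
    depth_axis = np.arange(volume.shape[0])[:, None, None]
    mask_volume = depth_axis >= depth_map[None, :, :]  # True inside the target volume
    processed = np.where(mask_volume, volume, reference_value)
    return processed, mask_volume
```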
  • The image processing unit 116 obtains a rendered image based on the processed volume data 300A. The rendered image may be a viewpoint image of the processed volume data 300A, which is formed on the image plane IPL. The image processing unit 116 may obtain the rendered image by ray casting; however, the rendered image may also be obtained by various other volume rendering methods. The display unit 120 of FIG. 1 displays the rendered image.
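  • Building on the hypothetical ray_reflection_sum helper sketched earlier, the rendered image could be formed by accumulating one ray per pixel of the image plane; the separate intensity and transparency volumes are assumptions of this illustration rather than features of the apparatus.

```python
import numpy as np

def render_volume(intensity_volume, transparency_volume):
    """Apply the Equation 1 accumulation along every ray (one ray per pixel,
    rays assumed parallel to axis 0) to form the rendered image."""
    _, height, width = intensity_volume.shape
    image = np.empty((height, width))
    for row in range(height):
        for col in range(width):
            # Reuses ray_reflection_sum() from the earlier sketch.
            image[row, col] = ray_reflection_sum(intensity_volume[:, row, col],
                                                 transparency_volume[:, row, col])
    return image
```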
  • FIG. 6 shows an example of the rendered image where the target is a fetus.
  • Referring to FIGS. 1 and 6, parts of the face of the target appear hidden by the hand of the target in the rendered image 400. Referring also to FIG. 3, this is because the depth from the image plane IPL to the hand of the target image 200 is shallower than the depth to the face of the target image 200.
  • In this regard, only the face of the target may be a region of interest. The hand of the target may be a region of non-interest. Thus, when the rendered image 400 includes the region of non-interest, a method of editing the rendered image 400 by eliminating the region of non-interest from the rendered image 400 is required.
  • The input unit 130 may receive an edit request to edit the rendered image 400 from a user. The image processing unit 116 may obtain an edited rendered image based on the edit request received through the input unit 130 and the depth data obtained by the depth-data obtaining unit 114. The display unit 120 may display the edited rendered image.
  • FIG. 7 shows an example of the edited rendered image where the target is the fetus. It is assumed that the edited rendered image 500 of FIG. 7 is obtained from the rendered image 400 based on the edit request and the depth data.
  • Referring to FIGS. 6 and 7, the hand, which is the region of non-interest displayed in the rendered image 400, is removed from the edited rendered image 500. As a result, the face of the target, which was hidden by the hand in the rendered image 400, is visible in the edited rendered image 500.
  • Next, an example of a method of obtaining the edited rendered image 500 from the rendered image 400 in the image processing unit 116 will be described.
  • The image processing unit 116 may obtain an editing area based on the edit request, establish a deletion volume corresponding to the editing area in the processed volume data based on the depth data, obtain edited volume data having the deletion volume eliminated from the processed volume data, and obtain the edited rendered image based on the edited volume data.
  • FIG. 8 shows examples of the rendered image having an editing area set up, and of the edited rendered image.
  • Referring to FIGS. 1 and 8, the image processing unit 116 may set up the editing area EA in the rendered image 400A based on the edit request received through the input unit 130. The display unit 120 may display the rendered image 400A in which the editing area EA is set up. The display unit 120 may also display the edited rendered image 500A formed by editing the rendered image 400A in the editing area EA. The editing area EA is illustrated as a circle, but is not limited thereto.
  • The edit request may include editing area information that indicates the editing area EA.
  • As an example, the user may input the editing area information by directly selecting the editing area EA on the displayed rendered image 400A. For example, the user may select the editing area EA on the rendered image 400A through the input unit 130 that can be implemented with a mouse, a track ball, etc.
  • As another example, the user may input the editing area information by selecting a point on the rendered image 400A displayed on the display unit 120. The image processing unit 116 may set up the editing area EA based on the point. For example, a circle centered on the point may be set up as the editing area EA, as sketched below. The radius r of the editing area EA may be predetermined by default or adjusted by the user. Alternatively, an oval or a square with respect to the point may be set up as the editing area EA.
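  • A circular editing area EA derived from a single user-selected point may be sketched as a boolean pixel mask; the helper below is illustrative only and assumes the point and radius are given in pixel coordinates.

```python
import numpy as np

def circular_editing_area(image_shape, center, radius):
    """Boolean mask of the pixels inside a circular editing area EA
    centred on the user-selected point (row, col)."""
    rows, cols = np.ogrid[:image_shape[0], :image_shape[1]]
    dist_sq = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    return dist_sq <= radius ** 2
```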
  • The image processing unit 116 may set up a deletion volume, which corresponds to the editing area EA established on the rendered image 400A, within the processed volume data.
  • FIG. 9 depicts an example of the processed volume data in which the deletion volume is set up, FIG. 10 shows an enlargement of a part of FIG. 9, and FIG. 11 depicts an example of edited volume data obtained by removing the deletion volume from the processed volume data.
  • Referring to FIGS. 1 and 9 to 11, the image processing unit 116 may set up the deletion volume DV, which corresponds to the editing area EA, within the processed volume data 300B based on the editing area EA in the image plane IPL, and then obtain the edited volume data 300C by removing the deletion volume DV from the processed volume data 300B.
  • The editing area EA in the image plane IPL may include at least one pixel (EP1-EP5). Although the first to fifth pixels EP1-EP5 are shown in a row in FIG. 10, they are merely illustrative, and the number of pixels included in the editing area EA is not limited. The pixels EP1-EP5 of FIG. 10 may be the pixels, from among the pixels PX1-PX5, that are included in the editing area EA.
  • A depth from the image plane IPL to a bottom surface BS of the deletion volume DV may be equal to or deeper than the depth to the corresponding surface [S(p), p=1, 2, . . . , 5] of the target image 200. In other words, the depth from the image plane IPL to the bottom surface BS of the deletion volume DV may exceed the depth to the corresponding surface S(p) of the target image 200 by a corresponding deletion depth [d(p), p=1, 2, . . . , 5], which may be zero.
  • In FIG. 10, the depth to the bottom surface BS of the deletion volume DV from the first pixel EP1 is equal to the depth to the surface S(1) of the target image 200. That is, the first deletion depth d(1) is 0. Furthermore, the depth to the bottom surface BS of the deletion volume DV from the third pixel EP3 is deeper than the depth to the surface S(3) of the target image 200 by the third deletion depth d(3).
  • The deletion depth d(p) may be set up in different ways. The deletion depth d(p) may be set up constantly or differently for the plurality of pixels EP1-EP5 included in the editing area EA.
  • The deletion depth d(p) may have an automatically established value. The deletion depth d(p) for the pth pixel EPp (p=1, 2, . . . , 5), one of the at least one pixels EP1-EP5 in the editing area EA, may be set up based on at least one of the following: the position of the pth pixel EPp within the editing area EA, the depth to the surface S(p) of the target image 200 for the pth pixel EPp, and the maximum depth to the surface Smax of the target image 200 within the editing area EA.
  • The shallower the depth to the surface S(p) of the target image 200 for the pth pixel EPp is, the larger the deletion depth d(p) becomes. Conversely, the deeper the depth to the surface S(p) of the target image 200 for the pth pixel EPp is, the smaller the deletion depth d(p) becomes.
  • In addition, the depth to the upper surface (US) of the deletion volume from the image plane IPL may be equal to or shallower than the minimum depth to the surface Smin of the target image 200 within the editing area EA. Furthermore, the depth to the bottom surface BS of the deletion volume DV from the image plane IPL may be equal to or shallower than the maximum depth to the surface Smax of the target image 200 within the editing area EA.
  • For example, the deletion depth d(p) for the corresponding pth pixel EPp may be obtained by the following equation:
  • d(p) = e^{-\frac{\lVert p - c \rVert^{2}}{2r^{2}}} \left( S_{\max} - S(p) \right)   <Equation 3>
  • where p represents the position of the pth pixel EPp in the editing area EA, c represents the center point of the editing area EA, and r represents the radius of the editing area EA.
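  • Assuming the Gaussian-weighted form of Equation 3 reconstructed above, the per-pixel deletion depths and the removal of the deletion volume from the processed volume data might be sketched as follows; deletion_depths and apply_deletion are hypothetical helpers that reuse the depth map and editing-area mask from the earlier sketches.

```python
import numpy as np

def deletion_depths(depth_map, ea_mask, center, radius):
    """Deletion depth d(p) for each pixel of the editing area (cf. Equation 3).

    Pixels near the centre are cut down close to the maximum surface depth
    S_max inside the area, while pixels near the rim are cut far less, which
    keeps the edited area continuous with its surroundings."""
    s_max = depth_map[ea_mask].max()
    rows, cols = np.ogrid[:depth_map.shape[0], :depth_map.shape[1]]
    dist_sq = (rows - center[0]) ** 2 + (cols - center[1]) ** 2
    weight = np.exp(-dist_sq / (2.0 * radius ** 2))
    return np.where(ea_mask, weight * (s_max - depth_map), 0.0)

def apply_deletion(processed_volume, depth_map, d, reference_value=0.0):
    """Remove the deletion volume DV: in the processed volume data, voxels
    shallower than S(p) + d(p) are set to the reference value."""
    depth_axis = np.arange(processed_volume.shape[0])[:, None, None]
    keep = depth_axis >= (depth_map + d)[None, :, :]
    return np.where(keep, processed_volume, reference_value)
```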
  • Alternatively, the deletion depth d(p) may have a value that is adjusted by the user. To this end, the edit request may further include user depth information that indicates the depth to the bottom surface BS of the deletion volume DV, or that indicates the deletion depth d(p) itself.
  • As such, the image processing unit 116 may obtain the edited volume data 300C, and obtain the edited rendered image (e.g., the rendered image 500A of FIG. 8) based on the edited volume data 300C.
  • When the edited rendered image also includes a region of non-interest, the user may re-input an edit request to re-edit the edited rendered image. The method of re-editing the edited rendered image may employ the method of editing the rendered image (e.g., 400A). Herein, any overlapping description will be omitted.
  • The aim of editing is to remove the region of non-interest from the rendered image, but in the editing process a part of the region of interest may be removed along with the region of non-interest. In addition, a removed region of non-interest may later become a region of interest. Accordingly, a method of recovering a part or all of the removed region from the edited rendered image is required.
  • Returning to FIG. 1, the input unit 130 may receive a recover request for recovering the edited rendered image (e.g., the edited rendered image 500A of FIG. 8). The image processing unit 116 may obtain a recovered rendered image based on the recover request. The display unit 120 may display the recovered rendered image.
  • FIG. 12 depicts an example of the edited volume data for the edited rendered image, and FIG. 13 depicts an example of recovered volume data for a recovered rendered image.
  • Referring to FIGS. 1, 12 and 13, the image processing unit 116 may set up a recover volume RV in the edited volume data 300D. The recover volume RV may be the deletion volume DV or a part of the deletion volume DV. The bottom surface of the recover volume RV may coincide with the bottom surface of the deletion volume DV.
  • The image processing unit 116 may obtain recovered volume data 300E by recovering the recover volume RV in the edited volume data 300D.
  • The storage unit 140 may store volume data, processed volume data, edited volume data, recovered volume data, etc., which are processed by the control unit 110. The image processing unit 116 may recover the recover volume RV based on the processed volume data stored in the storage unit 140.
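  • Recovery from the stored processed volume data could then be sketched as a masked copy; recover_mask is a hypothetical boolean volume that marks the recover volume RV (the deletion volume or a part of it).

```python
import numpy as np

def recover_volume_data(edited_volume, stored_processed_volume, recover_mask):
    """Restore the recover volume RV by copying the stored processed volume
    data back into the edited volume data wherever recover_mask is True."""
    return np.where(recover_mask, stored_processed_volume, edited_volume)
```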
  • FIG. 14 is a flowchart of the image processing method according to an embodiment of the present invention.
  • Referring to FIG. 14, initially, volume data containing a target image is obtained in operation S110. Depth data indicating a depth to the surface of the target image from an image plane IPL is obtained, in operation S120. Processed volume data is obtained by processing the volume data based on the depth data, and a rendered image is obtained based on the processed volume data, in operation S130. The rendered image is displayed in operation S140.
  • The image processing method shown in FIG. 14 may be performed by the image processing apparatus shown in FIG. 1. Each operation of the image processing method corresponds to the operations described in connection with FIGS. 1 to 13. Herein, any overlapping description will be omitted.
  • FIG. 15 shows an example of an edited rendered image obtained by performing an image processing method different from the embodiments of the present invention.
  • Referring to FIG. 15, a specific area may be removed from the volume data using a magic-cut functionality, and the rendered image may be obtained based on the volume data having the specific area removed. However, in FIG. 15, the edited rendered image shows discontinuity between the removed area and its surroundings, and thus appears unnatural and artificial. In addition, even an area whose removal was not desired may be removed from the volume data.
  • On the other hand, referring to FIG. 8, the edited rendered image 500A according to an embodiment of the present invention shows continuity between the editing area EA and its surroundings, thus appearing natural and non-artificial.
  • As such, according to the embodiments of the present invention, the image processing apparatus and method may be provided to efficiently process the volume data.
  • According to the embodiments of the present invention, the non-target volume may be selectively removed from the volume data. The non-target volume is likely to be noise or an obstacle whose depth is shallower than the depth to the surface of the target image. Thus, the quality of the rendered image may be enhanced because the noise or obstacle can be removed from the volume data while preserving the surface of the target image.
  • Furthermore, according to the embodiments of the present invention, by processing the volume data based on the depth data, the edited rendered image may be obtained in which a region of interest is revealed by removing the region of non-interest that hides it. In addition, once the user sets up the editing area through the edit request, the edited rendered image may be obtained directly. In conclusion, a more intuitive and convenient method of editing a rendered image may be provided for the user.
  • The foregoing method may be written as computer programs and may be implemented in general-use digital computers that execute the programs using a computer readable recording medium. The data structure used in the method can be recorded on the computer readable recording medium through various means. Examples of the computer readable recording medium include storage media such as read only memory (ROM), random access memory (RAM), universal serial bus (USB) storage devices, floppy disks, and hard disks, and optical recording media such as CD-ROMs and DVDs.
  • While the present invention has been particularly shown and described with reference to exemplary embodiments thereof, it will be understood by those of ordinary skill in the art that various changes in form and details may be made therein without departing from the spirit and scope of the present invention as defined by the following claims.

Claims (20)

What is claimed is:
1. An ultrasound imaging device comprising:
a processor configured to generate a first rendered image from a volume data obtained by scanning an object; and
a display configured to display the first rendered image,
wherein based on an edit request, the processor generates a second rendered image by rendering the volume data in a first region to a different depth from a depth of a surface in the first region in the first rendered image to reveal a hidden region of the object hidden by an obstacle,
the display displays the second rendered image, and
the hidden region comprises at least a part of a face region of a fetus.
2. The device of claim 1, further comprising a data obtaining device configured to obtain the volume data by scanning the object.
3. The device of claim 1, wherein the processor is further configured to indicate at least one surface of the object in the first rendered image based on an amount of reflection of signal included in the volume data.
4. The device of claim 1, further comprising an input device configured to receive the edit request with respect to the first rendered image.
5. The device of claim 1, wherein the processor is further configured to generate the second rendered image by removing the obstacle from the first rendered image.
6. The device of claim 1, wherein the processor is further configured to represent voxels included in a non-target volume as a reference color in the first rendered image and the second rendered image.
7. The device of claim 1, further comprising an input device configured to receive a recover request for recovering the second rendered image,
wherein the display is further configured to display a recovered rendered image in which a part edited while generating the second rendered image from the first rendered image is recovered from the second rendered image based on the recover request.
8. The device of claim 1, wherein the processor is further configured to obtain, based on the volume data, a depth data which indicates a depth to the surface of the object with respect to an image plane, and generate the first rendered image and the second rendered image based on the depth data.
9. The device of claim 1, wherein the processor is further configured to obtain the depth data based on an amount of reflection of signal included in the volume data.
10. The device of claim 1, wherein the processor is further configured to generate a second rendered image which represents the volume data at a deeper depth than a depth represented in the first rendered image with respect to a first region, to reveal a region of the object hidden by an obstacle, and represents the volume data at a depth equal to the depth represented in the first rendered image with respect to a region other than the first region.
11. An ultrasound imaging method comprising:
generating a first rendered image from a volume data obtained by scanning an object;
displaying the first rendered image;
based on an edit request, generating a second rendered image by rendering the volume data in a first region to a different depth from a depth of a surface in the first region in the first rendered image to reveal a hidden region of the object hidden by an obstacle; and
displaying the second rendered image,
wherein the hidden region comprises at least a part of a face region of a fetus.
12. The method of claim 11, further comprising obtaining the volume data by scanning the object.
13. The method of claim 11, further comprising indicating at least one surface of the object in the first rendered image based on an amount of reflection of signal included in the volume data.
14. The method of claim 11, further comprising receiving the edit request with respect to the first rendered image.
15. The method of claim 11, further comprising generating the second rendered image by removing the obstacle from the first rendered image.
16. The method of claim 11, further comprising representing voxels included in a non-target volume as a reference color in the first rendered image and the second rendered image.
17. The method of claim 11, further comprising receiving a recover request for recovering the second rendered image, and displaying a recovered rendered image in which a part edited while generating the second rendered image from the first rendered image is recovered from the second rendered image based on the recover request.
18. The method of claim 11, further comprising, obtaining, based on the volume data, a depth data which indicates a depth to the surface of the object with respect to an image plane, and generating the first rendered image and the second rendered image based on the depth data.
19. The method of claim 11, further comprising obtaining the depth data based on an amount of reflection of signal included in the volume data.
20. The method of claim 11, further comprising generating a second rendered image which represents the volume data at a deeper depth than a depth represented in the first rendered image with respect to a first region, to reveal a region of the object hidden by an obstacle, and represents the volume data at a depth equal to the depth represented in the first rendered image with respect to a region other than the first region.
US15/287,414 2012-03-07 2016-10-06 Image processing apparatus and method Abandoned US20170020485A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
US15/287,414 US20170020485A1 (en) 2012-03-07 2016-10-06 Image processing apparatus and method

Applications Claiming Priority (5)

Application Number Priority Date Filing Date Title
KR1020120023619A KR101329748B1 (en) 2012-03-07 2012-03-07 Image processing apparatus and operating method thereof
KR10-2012-0023619 2012-03-07
US13/789,628 US9256978B2 (en) 2012-03-07 2013-03-07 Image processing apparatus and method
US15/001,133 US10390795B2 (en) 2012-03-07 2016-01-19 Image processing apparatus and method
US15/287,414 US20170020485A1 (en) 2012-03-07 2016-10-06 Image processing apparatus and method

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
US15/001,133 Continuation US10390795B2 (en) 2012-03-07 2016-01-19 Image processing apparatus and method

Publications (1)

Publication Number Publication Date
US20170020485A1 true US20170020485A1 (en) 2017-01-26

Family

ID=47997003

Family Applications (3)

Application Number Title Priority Date Filing Date
US13/789,628 Active 2034-01-24 US9256978B2 (en) 2012-03-07 2013-03-07 Image processing apparatus and method
US15/001,133 Active US10390795B2 (en) 2012-03-07 2016-01-19 Image processing apparatus and method
US15/287,414 Abandoned US20170020485A1 (en) 2012-03-07 2016-10-06 Image processing apparatus and method

Family Applications Before (2)

Application Number Title Priority Date Filing Date
US13/789,628 Active 2034-01-24 US9256978B2 (en) 2012-03-07 2013-03-07 Image processing apparatus and method
US15/001,133 Active US10390795B2 (en) 2012-03-07 2016-01-19 Image processing apparatus and method

Country Status (4)

Country Link
US (3) US9256978B2 (en)
EP (1) EP2637142B1 (en)
JP (1) JP2013186905A (en)
KR (1) KR101329748B1 (en)

Families Citing this family (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20160007972A1 (en) * 2013-03-25 2016-01-14 Hitachi Aloka Medical, Ltd. Ultrasonic imaging apparatus and ultrasound image display method
KR20170068944A (en) * 2015-12-10 2017-06-20 삼성메디슨 주식회사 Method of displaying a ultrasound image and apparatus thereof
JP6546107B2 (en) * 2016-03-02 2019-07-17 株式会社日立製作所 Ultrasonic imaging system
WO2019045144A1 (en) * 2017-08-31 2019-03-07 (주)레벨소프트 Medical image processing apparatus and medical image processing method which are for medical navigation device
US10937207B2 (en) 2018-02-16 2021-03-02 Canon Medical Systems Corporation Medical image diagnostic apparatus, medical image processing apparatus, and image processing method
KR102362470B1 (en) * 2021-11-04 2022-02-15 (주)펄핏 Mehtod and apparatus for processing foot information

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6434260B1 (en) * 1999-07-12 2002-08-13 Biomedicom, Creative Biomedical Computing Ltd. Facial imaging in utero
US20060012596A1 (en) * 2004-07-15 2006-01-19 Yoshiyuki Fukuya Data editing program, data editing method, data editing apparatus and storage medium
US20060241461A1 (en) * 2005-04-01 2006-10-26 White Chris A System and method for 3-D visualization of vascular structures using ultrasound
US20120157837A1 (en) * 2010-02-01 2012-06-21 Takayuki Nagata Ultrasound probe and ultrasound examination device using the same
US20130150719A1 (en) * 2011-12-08 2013-06-13 General Electric Company Ultrasound imaging system and method
US20170186228A1 (en) * 2010-06-07 2017-06-29 Gary Stephen Shuster Creation and use of virtual places

Family Cites Families (13)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7168084B1 (en) * 1992-12-09 2007-01-23 Sedna Patent Services, Llc Method and apparatus for targeting virtual objects
US6437784B1 (en) * 1998-03-31 2002-08-20 General Mills, Inc. Image producing system for three-dimensional pieces
US6559872B1 (en) * 2000-05-08 2003-05-06 Nokia Corporation 1D selection of 2D objects in head-worn displays
TW512284B (en) * 2001-05-24 2002-12-01 Ulead Systems Inc Graphic processing method using depth auxiliary and computer readable record medium for storing programs
US7841458B2 (en) * 2007-07-10 2010-11-30 Toyota Motor Engineering & Manufacturing North America, Inc. Automatic transmission clutch having a one-piece resin balancer
EP2222224B1 (en) * 2007-11-21 2017-06-28 Edda Technology, Inc. Method and system for interactive percutaneous pre-operation surgical planning
TWI352354B (en) * 2007-12-31 2011-11-11 Phison Electronics Corp Method for preventing read-disturb happened in non
JP5395611B2 (en) * 2009-10-16 2014-01-22 株式会社東芝 Ultrasonic diagnostic apparatus, image data generation apparatus, and control program for image data generation
CN102397082B (en) * 2010-09-17 2013-05-08 深圳迈瑞生物医疗电子股份有限公司 Method and device for generating direction indicating diagram and ultrasonic three-dimensional imaging method and system
US9110564B2 (en) * 2010-11-05 2015-08-18 Lg Electronics Inc. Mobile terminal, method for controlling mobile terminal, and method for displaying image of mobile terminal
US20120113223A1 (en) * 2010-11-05 2012-05-10 Microsoft Corporation User Interaction in Augmented Reality
FR2964775A1 (en) * 2011-02-18 2012-03-16 Thomson Licensing METHOD FOR ESTIMATING OCCULTATION IN A VIRTUAL ENVIRONMENT
US20130162518A1 (en) * 2011-12-23 2013-06-27 Meghan Jennifer Athavale Interactive Video System

Also Published As

Publication number Publication date
EP2637142B1 (en) 2023-12-27
US20130235032A1 (en) 2013-09-12
JP2013186905A (en) 2013-09-19
EP2637142A2 (en) 2013-09-11
KR101329748B1 (en) 2013-11-14
US20160133043A1 (en) 2016-05-12
KR20130116399A (en) 2013-10-24
US9256978B2 (en) 2016-02-09
EP2637142A3 (en) 2017-12-13
US10390795B2 (en) 2019-08-27

Similar Documents

Publication Publication Date Title
US10390795B2 (en) Image processing apparatus and method
JP6147489B2 (en) Ultrasonic imaging system
US8202220B2 (en) Ultrasound diagnostic apparatus
KR102025756B1 (en) Method, Apparatus and system for reducing speckles on image
US8294706B2 (en) Volume rendering using N-pass sampling
US9111385B2 (en) Apparatus and method for rendering volume data
CN105007824A (en) Medical image processing device and image processing method
US20140176685A1 (en) Image processing method and image processing apparatus
JP2021520870A (en) Systems and methods for generating improved diagnostic images from 3D medical image data
US11037323B2 (en) Image processing apparatus, image processing method and storage medium
JPH08161520A (en) Method for extracting object part from three-dimensional image
WO2020173054A1 (en) Vrds 4d medical image processing method and product
US8526713B2 (en) Concave surface modeling in image-based visual hull
AU2019430258B2 (en) VRDS 4D medical image-based tumor and blood vessel ai processing method and product
AU2019431324B2 (en) VRDS 4D medical image multi-device Ai interconnected display method and product
CN111613301B (en) Arterial and venous Ai processing method and product based on VRDS 4D medical image
CN111613302B (en) Tumor Ai processing method and product based on medical image
JP2013013650A (en) Ultrasonic image processor
JP2019514545A (en) Imaging method and apparatus
JP2001297334A (en) Three-dimensional image constituting device

Legal Events

Date Code Title Description
STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION