Open Access
Objective color calibration for manufacturing facial prostheses

13 February 2021
Abstract

Significance: The main goal of rehabilitation through facial prostheses is to aid the individual’s social reintegration and to improve their quality of life. However, this treatment is not yet widely available in Brazil due to the lack of specialized clinics and the cost associated with the high number of medical appointments needed to reach the final result. One of the steps in the process is measuring skin color, which is observer-dependent and may suffer from the effect of metamerism.

Aim: Our methodology aims to obtain a standard across different devices and greater fidelity to the color seen in person, in order to reduce face-to-face iterations, reduce costs, and ensure better final results.

Approach: A physical device and a computer program were improved from previous projects. The changes included implementing the three-dimensional Thin-Plate Spline (TPS-3D) algorithm for color calibration and adding an optional non-uniform illumination correction to the process. We also aim to improve the project’s accessibility by using a colorimeter. The methodology and the algorithms were compared both to direct skin measurements and to color references.

Results: After processing, the ΔEab* metric between images of the same segments taken with different cameras and illumination conditions decreased from 18.81 ± 4.85 to 4.85 ± 1.72. In addition, when the images were compared with colorimetric readings of the skin, the difference went from 14.93 ± 4.11 to 5.85 ± 1.61. It was also observed that using a less expensive device did not affect the readings. The project is open source and available on GitHub.

Conclusions: The results demonstrate that the methodology can assist in the manufacturing of facial prostheses, decreasing the total number of consultations and providing greater reliability of the final result.

1. Introduction

Losses and malformations of the face, such as the loss of an ear, cause morphofunctional changes and, consequently, several psychosocial effects, such as depression, sadness, shame, anxiety, and anger.1,2 Thus, in some cases, prosthetic facial rehabilitation can be indicated.

The goal of alloplastic facial prostheses (Fig. 1) is to provide anatomical, functional, and aesthetic rehabilitation of facial losses and malformations,3 helping the social reintegration of the individual and, consequently, improving their quality of life.1,2,4–6 Even today, factors such as the scarcity of specialized clinics, the time and money spent on the number of consultations necessary to obtain the final result, and the large extension of the national territory make it difficult for most patients to receive treatment in Brazil.

Fig. 1

Example of facial prosthesis (courtesy of the Maxillo Facial Prosthesis clinic at the School of Dentistry of University of São Paulo).


In order to decrease the number of consultations for each patient and, consequently, the time necessary to make facial prostheses, several studies propose the use of technologies.7–14 However, in-person steps are still needed to carry out the anamnesis, the impression or scanning of the defect, the check of marginal adaptation, the extrinsic coloration, and the delivery of the final prosthesis.

In this process, a fundamental step is measuring the patient’s skin color. However, this determination is observer-dependent and may suffer from the effects of metamerism, characterized by the inability to distinguish differences in the color spectrum of objects: under different illuminations, the human eye’s trichromatic response can perceive distinct spectra as similar colors.15,16 Commercial spectrophotometers exist that obtain the color profile analytically, already adapting it to the pigments to be used. However, they have a relatively high cost and work with imported pigments, making them difficult to use, especially in the public sector.

A possible low-cost solution to reduce subjectivity in color measurement and to improve repeatability is the calibration and restoration processing of images obtained with mobile devices. This method is already used in teledermatology, in which the diagnosis is made through remote analysis of images of the patient’s skin, since the fidelity of the color presented to the health professional is a fundamental factor in the diagnosis of skin cancer. Correlations close to 100% between diagnoses using processed remote images and face-to-face consultations are reported in the literature.17–23

Among the challenges encountered in image processing, the existence of different color standards between devices and the effects caused by illumination or degradation of photograph quality stand out. In a previous work,24 a methodology was proposed to calibrate and restore images in teledermatology, resulting in a system composed of a physical device (Fig. 2) and software capable of calibrating and evaluating skin images taken with conventional and phone cameras, enabling a standard across different imaging devices. The present project adapts and improves that process, focusing on the area of maxillofacial prostheses.

Fig. 2

Demonstrative example of the previously developed methodology, which corrected color, perspective, and resolution of the image.


After assessing colors, the correct formulation is necessary so that the pigments used in the manufacturing of the prosthesis are consistent with the adjacent tissues.25,26 This work is part of a study that aims to generate a formulation of colors in the manufacture of facial prostheses from the obtained images.

2. Methods

2.1. Color Correction

The first fundamental point when dealing with color images is that each device has a different representation of the RGB color system. Thus, images of the same object captured by different cameras or reproduced by different printers can cause different visual perceptions for an observer.

To quantify such effects, the CIELAB (LAB) color space is used, which is obtained through spectrometric color readings and can be converted to the sRGB space, comparable to the commonly used RGB.27–29 Within the LAB space, the ΔEab* metric, defined as the Euclidean distance between two LAB vectors, is generally used to compare color perception: the lower the result, the more difficult it is for an observer to distinguish the two colors, with 3.00 being an accepted threshold for human differentiation.20,25,30
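As a concrete illustration, the ΔEab* metric above can be computed directly from two LAB vectors. The sketch below uses Python/NumPy for brevity (the project itself is an ImageJ plugin); the function name and the sample skin-tone values are hypothetical:

```python
import numpy as np

def delta_e_ab(lab1, lab2):
    """CIE76 ΔEab*: Euclidean distance between two LAB color vectors."""
    return float(np.linalg.norm(np.asarray(lab1, dtype=float)
                                - np.asarray(lab2, dtype=float)))

# Two hypothetical skin-tone readings in LAB coordinates.
diff = delta_e_ab((62.0, 14.0, 18.0), (63.0, 15.5, 16.0))
# Differences below ~3.00 are generally indistinguishable to a human observer.
indistinguishable = diff < 3.00
```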

The algorithm used previously, based on the definition of a 3×4 weight matrix A that minimizes the mean square error between a set of reference values and the set of measured values, provides good results and captures the linear variations of the image.31 However, it is not able to determine non-linear corrections for the images, which is one of its limitations.
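This previous linear method can be sketched as an ordinary least-squares fit of a 3×4 matrix over corresponding measured and reference RGB triplets. The NumPy sketch below is illustrative only; the function names are not from the project:

```python
import numpy as np

def fit_linear_calibration(measured_rgb, reference_rgb):
    """Least-squares fit of a 3x4 matrix A mapping measured RGB to reference RGB."""
    measured = np.asarray(measured_rgb, dtype=float)    # (N, 3)
    reference = np.asarray(reference_rgb, dtype=float)  # (N, 3)
    # Augment each measured color with a constant 1 to allow an offset term.
    X = np.hstack([np.ones((measured.shape[0], 1)), measured])  # (N, 4)
    # Minimize the mean square error of X @ A.T against the references.
    A, *_ = np.linalg.lstsq(X, reference, rcond=None)
    return A.T  # (3, 4) weight matrix

def apply_linear_calibration(A, rgb):
    """Correct one RGB triplet with the fitted 3x4 matrix."""
    return A @ np.concatenate(([1.0], np.asarray(rgb, dtype=float)))
```

Being affine, this transform can scale, rotate, and shift colors globally, but it cannot bend the color space locally, which motivates the spline approach described next.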

In this calibration process, it is important to highlight the different RGB values present in the devices, indicating which reference is used at each step of the software. Figure 3 shows the different steps in the calibration process with 16 tonalities. In this work, a spectrophotometer (CM-3600A, Konica Minolta, Japan) was used.

Fig. 3

Diagram of the main steps and different RGB values associated with the evaluation of correction algorithm.


Therefore, the use of an algorithm capable of correcting both linear and non-linear transformations was proposed. This algorithm, called the three-dimensional Thin-Plate Spline (TPS-3D) in the literature, maps points between different spaces, interpolating them while minimizing the bending energy, defined by the integral of the sum of squared second derivatives.32 Since this mapping is a spline of a non-linear function, the algorithm can determine corrections that a simple matrix transformation could not.

In this algorithm, matrices W and A are calculated by

Eq. (1)

$$\begin{bmatrix} W \\ A \end{bmatrix} = \begin{bmatrix} K & P \\ P^{T} & 0_{4\times 4} \end{bmatrix}^{-1} \begin{bmatrix} V \\ 0_{4\times 3} \end{bmatrix},$$
in which $0_{4\times 4}$ and $0_{4\times 3}$ are zero matrices with dimensions 4×4 and 4×3; $P$ (and its transpose $P^{T}$) is the matrix of the average RGB points of each of the 16 segments read (corresponding to step 4 of Fig. 3), with a 1 prepended to each line, as indicated by

Eq. (2)

$$P=\begin{bmatrix} 1 & R_{1} & G_{1} & B_{1} \\ 1 & R_{2} & G_{2} & B_{2} \\ \vdots & \vdots & \vdots & \vdots \\ 1 & R_{16} & G_{16} & B_{16} \end{bmatrix}.$$

$K$ is the matrix of $U(r)$ terms responsible for the shape distortion of the TPS-3D, defined by

Eq. (3)

$$K=\begin{bmatrix}
0 & U(r_{1,2}) & \cdots & U(r_{1,15}) & U(r_{1,16}) \\
U(r_{2,1}) & 0 & \cdots & U(r_{2,15}) & U(r_{2,16}) \\
\vdots & \vdots & \ddots & \vdots & \vdots \\
U(r_{15,1}) & U(r_{15,2}) & \cdots & 0 & U(r_{15,16}) \\
U(r_{16,1}) & U(r_{16,2}) & \cdots & U(r_{16,15}) & 0
\end{bmatrix},
\qquad U(r_{i,j})=2\,r_{i,j}^{2}\,\log\!\left(r_{i,j}+10^{-20}\right),$$
where

Eq. (4)

$$r_{i,j}=\sqrt{(R_{i}-R_{j})^{2}+(G_{i}-G_{j})^{2}+(B_{i}-B_{j})^{2}}.$$
Finally, V is the matrix of RGB color references of the 16 points (corresponding to step 3 of Fig. 3), given by

Eq. (5)

$$V=\begin{bmatrix} R_{1} & G_{1} & B_{1} \\ \vdots & \vdots & \vdots \\ R_{16} & G_{16} & B_{16} \end{bmatrix}.$$

In this model, matrices $W$ and $A$ are used to correct a pixel with values $R_{\text{meas}}$, $G_{\text{meas}}$, $B_{\text{meas}}$ using the formulation given by

Eq. (6)

$$\begin{bmatrix} R_{\text{calib}} & G_{\text{calib}} & B_{\text{calib}} \end{bmatrix}=\begin{bmatrix} K_{\text{meas}} & P_{\text{meas}} \end{bmatrix}\begin{bmatrix} W \\ A \end{bmatrix},$$
with Kmeas and Pmeas defined as

Eq. (7)

$$P_{\text{meas}}=\begin{bmatrix} 1 & R_{\text{meas}} & G_{\text{meas}} & B_{\text{meas}} \end{bmatrix},$$

Eq. (8)

$$K_{\text{meas}}=\begin{bmatrix} U_{\text{meas}}(r_{1}) & U_{\text{meas}}(r_{2}) & \cdots & U_{\text{meas}}(r_{16}) \end{bmatrix},$$
in which the function Umeas(ri) and ri are given by

Eq. (9)

$$U_{\text{meas}}(r_{i})=2\,r_{i}^{2}\,\log\!\left(r_{i}+10^{-20}\right)$$

Eq. (10)

$$r_{i}=\sqrt{(R_{\text{meas}}-R_{i})^{2}+(G_{\text{meas}}-G_{i})^{2}+(B_{\text{meas}}-B_{i})^{2}}.$$

It is important to emphasize that the method of this work is intended to approximate skin colors, since these lie close to the reference model, thus reducing the expected error within the interpolated space. Conversely, colors distant from this standard can present greater relative errors.
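The equations above can be assembled into a short NumPy sketch. This is an illustrative implementation, not the project's ImageJ plugin; the function names are hypothetical, and the code assumes the 16 measured segment averages (step 4) and their references (step 3) are given as (N, 3) arrays:

```python
import numpy as np

EPS = 1e-20  # small offset so the logarithm is defined at r = 0

def _U(r):
    # Radial basis function of Eqs. (3) and (9): U(r) = 2 r^2 log(r + 1e-20).
    return 2.0 * (r ** 2) * np.log(r + EPS)

def fit_tps3d(measured, reference):
    """Solve Eq. (1) for W and A given N corresponding RGB triplets."""
    measured = np.asarray(measured, dtype=float)    # (N, 3), step-4 averages
    reference = np.asarray(reference, dtype=float)  # (N, 3), step-3 references
    n = measured.shape[0]
    # Pairwise RGB distances r_{i,j} of Eq. (4).
    diff = measured[:, None, :] - measured[None, :, :]
    r = np.sqrt((diff ** 2).sum(axis=2))
    K = _U(r)                                   # (N, N) matrix of Eq. (3)
    P = np.hstack([np.ones((n, 1)), measured])  # (N, 4) matrix of Eq. (2)
    # Block system of Eq. (1): [[K, P], [P^T, 0]] [W; A] = [V; 0].
    L = np.zeros((n + 4, n + 4))
    L[:n, :n] = K
    L[:n, n:] = P
    L[n:, :n] = P.T
    rhs = np.vstack([reference, np.zeros((4, 3))])
    sol = np.linalg.solve(L, rhs)
    return sol[:n], sol[n:]                     # W is (N, 3), A is (4, 3)

def apply_tps3d(W, A, measured, rgb):
    """Correct a single pixel with Eq. (6), using Eqs. (7)-(10)."""
    measured = np.asarray(measured, dtype=float)
    rgb = np.asarray(rgb, dtype=float)
    r = np.sqrt(((measured - rgb) ** 2).sum(axis=1))      # Eq. (10)
    return _U(r) @ W + np.concatenate(([1.0], rgb)) @ A   # Eqs. (6)-(9)
```

Because the thin-plate spline interpolates its control points, applying the fitted correction to one of the 16 measured segment averages returns the corresponding reference color, up to numerical precision.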

In order to verify the improvement of the results from the change in algorithm, both were used in the same set of images obtained by printing the models on two different printers (Officejet Pro 8600 and Deskjet 2546, Hewlett-Packard, United States) and photographs taken by two cell phones [Moto G4, Motorola, United States; Pixi 4 (3.5ʺ), Alcatel, France] with different camera resolutions (13 and 5 MP), following the flowchart of Fig. 4.

Fig. 4

Flowchart for algorithm comparison with steps of image acquisition and reference values used for both cases.


2.2. Illumination Correction

Next, an illumination correction was implemented so that the calibration process does not depend on uniform lighting, thus yielding more reliable and robust results.

This process is commonly called background subtraction and consists of using a characteristic representation of the background of the image to perform the correction.33,34

In practice, it is necessary to estimate the image with illumination I(x,y) so that the following operation can be performed:

Eq. (11)

$$G(x,y)=\frac{F(x,y)}{I(x,y)}\cdot C,$$
in which $G(x,y)$ is the illumination-corrected image, $F(x,y)$ is the original image, and $C$ is a constant chosen so that the luminosity in $G$ is consistent with $F$, defined by
$$C=\operatorname{mean}\bigl(F(x,y)\bigr)\cdot\frac{1}{\operatorname{mean}\!\left(\dfrac{F(x,y)}{I(x,y)}\right)}.$$

First, it is necessary to point out that this correction must be made only on the L channel of the image’s LAB space, corresponding to luminosity; if it were made on the other channels, or in the RGB space, the resulting colors could change unpredictably.

In this work, $I(x,y)$ was estimated using a Gaussian filter with standard deviation (σ) equal to half of the largest dimension of the whole image; this low-pass filter retains only the global characteristics of the image.
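Under these assumptions (the L channel given as a 2D array, SciPy available for Gaussian filtering), Eq. (11) and the illumination estimate can be sketched as follows; the function name is illustrative, not from the project:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def correct_illumination_l_channel(L_channel):
    """Flatten non-uniform lighting on the L (luminosity) plane, Eq. (11)."""
    F = np.asarray(L_channel, dtype=float)
    # Estimate the illumination field I(x, y) with a strong Gaussian low-pass;
    # sigma is half of the largest image dimension, as described in the text.
    sigma = max(F.shape) / 2.0
    I = gaussian_filter(F, sigma=sigma)
    ratio = F / I
    # C keeps the mean luminosity of the result consistent with the original F.
    C = F.mean() / ratio.mean()
    return ratio * C
```

Note that the scaling by C preserves the mean luminosity of the original image exactly, so a uniformly lit image passes through unchanged.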

The project with both changes is available in full at https://github.com/yargo13/correct_image and is developed as a plugin for the ImageJ platform; however, it has not yet been migrated to version 2 of the framework.35

2.3. Project Accessibility

In order to increase the accessibility of the project, one of the main points is to use a lower cost alternative to obtain the spectral values of colors in the printed device, which corresponds to step 3 of Fig. 3. For this, the use of a hand-held colorimeter (ColorMunki, X-Rite, Grand Rapids, United States) was verified.

The device is a spectrophotometer intended for profiling imaging devices such as projectors and printers. Therefore, it has a lower cost compared with scientific-grade devices, which cover a broader spectral range not restricted to visible light. In addition, it is more portable due to its dimensions.

The objective of this step was to verify the reproducibility of the readings previously made with the CM-3600A spectrophotometer compared with the ColorMunki Photo, both before calibration. For this, using two devices printed by different printers, three readings of the LAB values of each color segment were made with the ColorMunki Photo and averaged; they were then compared with the measurements made with the spectrophotometer. A diagram of these steps is presented in Fig. 5. In this comparison, it is important to note the illuminant settings used, since the spectrophotometer reported readings for the D65 illuminant while the X-Rite product used the D50 illuminant. Therefore, for an effective comparison, the reflectance values read on the spectrophotometer were converted to LAB using the D50 illuminant. This conversion is established in the literature.36

Fig. 5

Diagram of device comparison analysis.


2.4. Temporal Degradation

The first step to study the temporal deterioration of the model was to compare measurements made 13 months apart using the same spectrophotometer. Based on this result, it is possible to determine whether there were variations in the color spectrum, characteristic of the studied phenomenon.

In this comparison, the LAB values of all segments were read for three devices obtained from two printers, totaling six devices. The data obtained were compared in two ways.

  • 1. The values of similar segments were averaged in each device, and the analysis was made considering 16 values for each printer.

  • 2. The 96 values of each segment (16 for each of the six devices) were analyzed individually against the previous values.

A diagram representing this analysis can be seen in Fig. 6. In this analysis, in addition to comparing the values using the ΔEab* metric, a statistical analysis was performed according to the Bland–Altman method.37

Fig. 6

Diagram of temporal degradation analysis with spectrophotometer, specifying the two comparisons used.

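The Bland–Altman quantities used in this comparison (the bias and the 95% limits of agreement) can be computed in a few lines of NumPy. This sketch uses hypothetical L* readings and is not the statistical tooling used in the study:

```python
import numpy as np

def bland_altman(method_a, method_b):
    """Bias and 95% limits of agreement between two measurement methods."""
    diff = np.asarray(method_a, dtype=float) - np.asarray(method_b, dtype=float)
    bias = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation of the differences
    return bias, (bias - 1.96 * sd, bias + 1.96 * sd)

# Hypothetical L* readings of the same segments taken 13 months apart.
bias, limits = bland_altman([61.2, 54.8, 70.1, 48.3], [61.5, 54.1, 69.8, 48.9])
```

Values falling mostly within the limits of agreement, with a bias near zero, indicate that the two sets of readings agree, which is how the temporal stability of the printed models is judged here.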

2.5. Skin Measurements

During the last stage of the project, the calibration methodology developed was evaluated through a procedure in which the final LAB color values in the image were compared with values read using the ColorMunki colorimeter. The intention was to verify the applicability of the system and to identify possible improvements for future projects.

For this, photographs containing skin segments were taken with two cell phones and two calibration models under different illumination conditions. In each skin segment, regions of interest were manually established that were small enough to be specific but still allowed a color measurement unaffected by the marking (Fig. 7).

Fig. 7

Comparison of the same skin segment in different illumination situations with regions of interest.


Even though the purpose of the current project is to match face tones, the pictures in this part were taken of the forearm, due to ease of operation and based on the assumption that skin tone variability between different anatomical parts is not significant.

Each region of interest had its LAB component value measured in three situations:

  • (1) using the colorimeter, with the average of three readings;

  • (2) using the software to average the LAB results of the manually selected region before calibration;

  • (3) using the software to average the LAB results of the manually selected region after calibration.

For this study, measurement (1) is adopted as the reference for a human observer. However, there is evidence in the literature that contact-based measurements are not very accurate, because of the translucent character of the skin and because the contact affects local circulation, which directly impacts the color read.38,39

First, the agreement of the results after calibration under different illumination conditions was verified. For this, conditions (2) and (3) were established for five photographs varying the cell phone model and the color table. Then the images were compared two by two for each skin segment, checking the value of ΔEab* between each pair.

Then conditions (2) and (3) were compared with condition (1) in the same way, with the reference being the average of five skin readings. This choice is justified by the high variability of measurements on skin, thus seeking an estimated average value. In this case, comparisons for the three skin segments were made for each image.
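The two-by-two comparison of images for one skin segment can be sketched with itertools.combinations; the function name and LAB values below are hypothetical:

```python
import itertools
import numpy as np

def pairwise_delta_e(lab_readings):
    """Mean CIE76 ΔEab* over all pairs of images for one skin segment."""
    labs = [np.asarray(v, dtype=float) for v in lab_readings]
    diffs = [float(np.linalg.norm(p - q))
             for p, q in itertools.combinations(labs, 2)]
    return float(np.mean(diffs)), diffs

# Hypothetical LAB values of one segment in three calibrated photographs.
mean_de, per_pair = pairwise_delta_e([(62, 14, 18), (63, 15, 17), (61, 13, 19)])
```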

3. Results and Discussion

3.1. Color Correction

The results comparing ΔEab* for the linear and TPS-3D algorithms are shown in Table 1, with an example in Fig. 8. Values of ΔEab* < 3.00 can be considered unnoticeable to human observers.

Table 1

Values comparing the results of the linear algorithm and TPS-3D to the reference.

                                 Linear algorithm calibration ΔEab*   TPS-3D algorithm calibration ΔEab*
Camera 1 (5 MP) and printer 1    1.0130 ± 0.0513                      0.3827 ± 0.0405
Camera 1 (5 MP) and printer 2    1.0321 ± 0.1957                      0.3599 ± 0.0099
Camera 2 (13 MP) and printer 1   1.6000 ± 0.3840                      0.4332 ± 0.0632
Camera 2 (13 MP) and printer 2   1.4456 ± 0.3308                      0.5183 ± 0.1155
Total                            1.2727 ± 0.3546                      0.4235 ± 0.0866

Fig. 8

(a) Original image; (b) calibration using linear algorithm; (c) calibration using TPS-3D algorithm; and (d) logarithm of the RGB difference between calibrations.


Also, when executed on a desktop with an Intel i5-4460 (3.20 GHz, 4 cores) processor, the time difference between the algorithms was on average 13.3495 ± 4.1359 s, which represents 22.28 ± 24.18 percentage points of the whole processing time. The high variability in percentage is due to the resolution-correction stage, which can significantly increase the total time.

As seen, the newly implemented algorithm performed better for color correction, decreasing both the final error and the standard deviation. In addition, as shown in Fig. 8, there was little difference between some skin segments when comparing the two algorithms. Thus, TPS-3D is a viable alternative that does not add high computational complexity.

3.2. Illumination Correction

For the illumination correction part, the same images were preprocessed using the methodology described and then calibrated using the TPS-3D algorithm. The consolidated results are presented in Table 2, and an example of successful calibration is shown in Fig. 9.

Table 2

Results for ΔEab* after illumination correction compared to the reference.

                                 ΔEab* after illumination correction
Camera 1 (5 MP) and printer 1    0.4567 ± 0.0780
Camera 1 (5 MP) and printer 2    0.3843 ± 0.0060
Camera 2 (13 MP) and printer 1   0.3995 ± 0.0388
Camera 2 (13 MP) and printer 2   0.3876 ± 0.0132
Average                          0.4055 ± 0.0488

Fig. 9

(a) Original image; (b) estimate of I(x,y) using a Gaussian filter; and (c) image corrected using the illumination correction procedure.


Comparing these results with those in Table 1, the average final values were lower, but not in all cases. Thus, illumination correction is best applied when non-uniformity is detected, and it should be offered as an option to the user when such problems are perceived.

3.3. Project Accessibility

The results of the comparison of the two devices are shown in Fig. 10, containing the analysis for the L*, a*, and b* components of the values read for the same illuminant without calibration.

Fig. 10

Comparison of measurements on CM-3600A converted to D50 illuminant and ColorMunki Photo.


As seen, the similarity between the readings was high, with an average equivalent ΔEab* of 0.55 ± 1.38, in addition to low limits of agreement. It can therefore be concluded that the ColorMunki Photo is lower-cost and more portable equipment that does not jeopardize measurement reliability.

3.4. Temporal Degradation

Due to acquisition problems, only 92 of the 96 segments were used in the two proposed comparisons, whose results are shown in Fig. 11.

Fig. 11

Difference in the L*, a*, and b* components of readings from the same device taken 13 months apart.


It is possible to observe that, in this analysis, the variations presented by the devices in comparison (1) were slightly high; however, when analyzed together, considering the limits of agreement, they indicate that the printed models did not present significant variation over time.

Based on the averages and standard deviations obtained, the two comparisons present similar results, indicating that averaging the individual variations does not decrease the global variation.

Finally, analyzing the values of ΔEab*, in comparison (1) we obtained 1.21±0.25, whereas in comparison (2) the result was 1.34±0.25, both of which are low and reinforce the conclusions reached.

3.5. Skin Measurements

The results for the first part of this section, comparing images from the same skin segments before and after calibration, are shown in Fig. 12.

Fig. 12

Comparison of ΔEab* on segments of different images before and after calibration using the TPS-3D algorithm.


It is possible to observe that the same skin segments before calibration could have differences as high as 30 units in the metric, with an average of 18.81±4.85. After processing, this value dropped to 4.85±1.72, indicating greater similarity between the corrected colors, even when they were not part of the set used in the interpolation algorithm.

Despite this, the average value was still high compared with the threshold considered for human perception (ΔEab* < 3.00).20 Possible reasons include the presence of local shadows in the segments, caused by adjacent parts, or skin tones not covered by the color space used in the calibration. Possible future improvements include resizing the device and an in-depth analysis of the colors to be included in it, considering distortions introduced during the printing process.

Given the consistency of the values after calibration, the comparison of the results against the reference was done in the same way. The results are shown in Fig. 13.

Fig. 13

Comparison of ΔEab* on segments with skin reference before and after calibration using the TPS-3D algorithm.


In this analysis, the average dropped from 14.93±4.11 before calibration to 5.85±1.61 after the process, making the reading more reliable in relation to the objective, even though this result may be improved in the future.

4. Conclusion

In this paper, the calibration methodology developed showed significant improvements in the measurement of skin color compared with the procedure without calibration. This can be seen directly in the greater similarity of the colors obtained after processing images taken with different cameras and illumination conditions, which were also closer to those obtained by direct skin color reading.

It was also shown that the process can be implemented at lower cost, with reliable measurements using simpler equipment and without significant deterioration of the printed colors over time. In addition, the implemented algorithms provide greater robustness and consistent results between executions. Therefore, a calibrated device produced with a single printer and characterized with a colorimeter or spectrophotometer can be distributed and used with different phone cameras and scenarios, yielding consistent results. Also, only a computer is necessary to run the software, which in the future could be offered as a phone application or even a web service.

As a possibility for future improvement, the colors used in the physical device can be studied so that the range of skin tones covered is as wide as possible, considering the distortions introduced by printing. There is also the possibility of measuring skin tones with non-contact equipment for results with greater fidelity.

Finally, for a good calibration, a photograph with uniform illumination (or execution of the illumination-correction step if variation is observed) and without shadowed regions is recommended; better results are found when the physical device is perpendicular to the light source. If the obtained image appears to present large color distortions, it is recommended to take a new photograph, since the degrading factors mentioned may have affected the processing.

Disclosure

The authors certify that they have no affiliations with or involvement in any organization or entity with any financial interest (such as honoraria; educational grants; participation in speakers’ bureaus; membership, employment, consultancies, stock ownership, or other equity interest; and expert testimony or patent-licensing arrangements), or non-financial interest (such as personal or professional relationships, affiliations, knowledge, or beliefs) in the subject matter or materials discussed in this manuscript.

Acknowledgments

This study was financed in part by the Conselho Nacional de Desenvolvimento Científico e Tecnológico–CNPq, Grant No. 305610/2018-0.

Code, Data, and Materials Availability

The code for this project is made available at https://github.com/yargo13/correct_image. Data and materials supporting the results reported in the manuscript can be requested by contacting the corresponding author.

References

1. 

T. A. C. de Amaro, R. Belfort and C. M. Erwenne, “A comparative psychological study of enucleated patients due to trauma or to ocular tumor, all using prosthesis,” Acta Oncol. Bras, 20 (4), 138 –142 (2000). Google Scholar

2. 

M. S. O. do Cardoso et al., “Psychosocial implications for patients with loss of the eyeball,” Rev. Cir. Traumatol, 7 (1), 79 –84 (2007). Google Scholar

3. 

A. C. L. Saboia, B. S. C. Mattos, Reabilitação Protética Craniomaxilofacial, 107 –118 Santos, São Paulo (2013). Google Scholar

4. 

M. S. O. do Cardoso et al., “Importance of prosthetic nasal rehabilitation: a case report,” Rev. Cir. Traumatol. Buco-Maxilo-Fac, 6 (1), 43 –46 (2006). Google Scholar

5. 

T.-L. Chang et al., “Treatment satisfaction with facial prostheses,” J. Prosthet. Dent., 94 (3), 275 –280 (2005). https://doi.org/10.1016/j.prosdent.2005.06.002 JPDEAT 0022-3913 Google Scholar

6. 

J. P. Wiens, R. L. Wiens, “Psychological management of the maxillofacial prosthetic patient,” Clinical Maxillofacial Prosthetics, 1 –14 Quintessence Publishing(2001). Google Scholar

7. 

C.-M. Cheah et al., “Integration of laser surface digitizing with CAD/CAM techniques for developing facial prostheses. Part 1: Design and fabrication of prosthesis replicas,” Int. J. Prosthodont., 16 (4), 435 –441 (2003). Google Scholar

8. 

T. Jiao et al., “Design and fabrication of auricular prostheses by CAD/CAM system,” Int. J. Prosthodont., 17 (4), 460 –463 (2004). Google Scholar

9. 

L. H. Chen, S. Tsutsumi and T. Iizuka, “A CAD/CAM technique for fabricating facial prostheses: a preliminary report,” Int. J. Prosthodont., 10 (5), 467 –472 (1997). Google Scholar

10. 

G. Wu et al., “Selective laser sintering technology for customized fabrication of facial prostheses,” J. Prosthet. Dent., 100 (1), 56 –60 (2008). https://doi.org/10.1016/S0022-3913(08)60138-9 JPDEAT 0022-3913 Google Scholar

11. 

C. Chee Kai et al., “Facial prosthetic model fabrication using rapid prototyping tools,” Integr. Manuf. Syst., 11 (1), 42 –53 (2000). https://doi.org/10.1108/09576060010303668 IMSYEY 0957-6061 Google Scholar

12. 

M. Tsuji et al., “Fabrication of a maxillofacial prosthesis using a computer‐aided design and manufacturing system,” J. Prosthodont., 13 (3), 179 –183 (2004). https://doi.org/10.1111/j.1532-849X.2004.04029.x Google Scholar

13. 

L. Ciocca et al., “New protocol for construction of eyeglasses-supported provisional nasal prosthesis using CAD/CAM techniques,” J. Rehabil. Res. Dev., 47 (7), 595 (2010). https://doi.org/10.1682/JRRD.2009.11.0189 JRRDEC 0748-7711 Google Scholar

14. 

L. Ciocca and R. Scotti, “CAD-CAM generated ear cast by means of a laser scanner and rapid prototyping machine,” J. Prosthet. Dent., 92 (6), 591 –595 (2004). https://doi.org/10.1016/j.prosdent.2004.08.021 JPDEAT 0022-3913 Google Scholar

15. 

M. E. L. Leow et al., “Metamerism in aesthetic prostheses under three standard illuminants—TL84, D65, and F,” Prosthet. Orthot. Int., 23 (2), 174 –180 (1999). https://doi.org/10.3109/03093649909071630 POIND7 0309-3646 Google Scholar

16. 

D. B. Judd and G. Wyszecki, Color in Business, Science and Industry, Wiley, New York (1952). Google Scholar

17. 

H. A. Miot, “Desenvolvimento e sistematização da interconsulta dermatológica a distância,” Universidade de São Paulo(2005). Google Scholar

18. 

C. S. Silva et al., “Teledermatologia: correlação diagnóstica em serviço primário de saúde,” An. Bras. Dermatol., 84 (5), 489 –493 (2009). https://doi.org/10.1590/S0365-05962009000500007 Google Scholar

19. 

R. P. Braun et al., “Telemedical wound care using a new generation of mobile telephones,” Arch. Dermatol., 141 (2), 254 –258 (2005). https://doi.org/10.1001/archderm.141.2.254 Google Scholar

20. 

Y. V. Haeghen et al., “An imaging system with calibrated color image acquisition for use in dermatology,” IEEE Trans. Med. Imaging, 19 (7), 722 –730 (2000). https://doi.org/10.1109/42.875195 ITMID4 0278-0062 Google Scholar

21. 

S. Kroemer et al., “Mobile teledermatology for skin tumour screening: diagnostic accuracy of clinical and dermoscopic image tele-evaluation using cellular phones,” Br. J. Dermatol., 164 (5), 973 –979 (2011). https://doi.org/10.1111/j.1365-2133.2011.10208.x BJDEAZ 0007-0963 Google Scholar

22. 

S. A. Lamel et al., “Application of mobile teledermatology for skin cancer screening,” J. Am. Acad. Dermatol., 67 (4), 576 –581 (2012). https://doi.org/10.1016/j.jaad.2011.11.957 JAADDB 0190-9622 Google Scholar

23. 

N. V. Matveev and B. A. Kobrinsky, “Automatic colour correction of digital skin images in teledermatology,” J. Telemed. Telecare, 12 (3_suppl), 62 –63 (2006). https://doi.org/10.1258/135763306779379978 Google Scholar

24. 

Y. V. Tessaro and S. S. Furuie, Anais do XXV Congresso Brasileiro de Engenharia Biomédica, 983 –986 (2016). Google Scholar

25. 

R. D. Paravina et al., “Color difference thresholds of maxillofacial skin replications,” J. Prosthodont., 18 (7), 618 –625 (2009). https://doi.org/10.1111/j.1532-849X.2009.00465.x Google Scholar

26. 

X. Hu, A. B. Gilbert and W. M. Johnston, “Interfacial corrections of maxillofacial elastomers for Kubelka–Munk theory using non-contact measurements,” Dent. Mater., 25 (9), 1163 –1168 (2009). https://doi.org/10.1016/j.dental.2009.04.003 Google Scholar

27. 

K. McLaren, “The development of the CIE 1976 (L* a* b*) uniform colour space and colour-difference formula,” J. Soc. Dye. Colour., 92 (9), 338 –341 (2008). https://doi.org/10.1111/j.1478-4408.1976.tb03301.x JSDCAA 0037-9859 Google Scholar

28. 

“IEC 61966-2-1:2003 Multimedia systems and equipment—Colour measurement and management—Part 2-1: Colour management—Default RGB colour space—sRGB,” (2003). Google Scholar

29. 

Commission Internationale de l’Éclairage, “CIE 15:2004 Colorimetry,” (2004). Google Scholar

30. 

M. Stokes et al., “A standard default color space for the Internet—sRGB,” in Color Imaging Conf., (1996). Google Scholar

31. 

J. Marguier et al., “Assessing human skin color from uncalibrated images,” Int. J. Imaging Syst. Technol., 17 (3), 143 –151 (2007). https://doi.org/10.1002/ima.20114 IJITEG 0899-9457 Google Scholar

32. 

P. Menesatti et al., “RGB color calibration for quantitative image analysis: the ‘3D Thin-Plate Spline’ warping approach,” Sensors (Switzerland), 12 (6), 7063 –7079 (2012). https://doi.org/10.3390/s120607063 Google Scholar

34. 

J. Russ, The Image Processing Handbook, 5th ed.CRC Press(2006). Google Scholar

35. 

C. T. Rueden et al., “ImageJ2: ImageJ for the next generation of scientific image data,” BMC Bioinf., 18 (1), 529 (2017). https://doi.org/10.1186/s12859-017-1934-z BBMIC4 1471-2105 Google Scholar

36. 

Commission Internationale de l’Éclairage, “Commission internationale de l’Eclairage proceedings,” Cambridge (1931). Google Scholar

37. 

J. Martin Bland and D. Altman, “Statistical methods for assessing agreement between two methods of clinical measurement,” Lancet, 327 (8476), 307 –310 (1986). https://doi.org/10.1016/S0140-6736(86)90837-8 LANCAO 0140-6736 Google Scholar

38. 

D. J. Gozalo-Diaz et al., “Measurement of color for craniofacial structures using a 45/0-degree optical configuration,” J. Prosthet. Dent., 97 (1), 45 –53 (2007). https://doi.org/10.1016/j.prosdent.2006.10.013 JPDEAT 0022-3913 Google Scholar

39. 

X. Hu, W. M. Johnston and R. R. Seghi, “Measuring the color of maxillofacial prosthetic material,” J. Dent. Res., 89 (12), 1522 –1527 (2010). https://doi.org/10.1177/0022034510378012 JDREAF 0022-0345 Google Scholar

Biography

Yargo Vó Tessaro received his bachelor’s degree in electrical engineering with emphasis on automation and control from the University of São Paulo. Currently, he is a software development engineer. His areas of interest include medical image processing and educational technologies.

Sergio Furuie received his bachelor’s degree in electronic engineering from the Aeronautics Institute of Technology and his PhD in electronic systems from the University of São Paulo (USP), Brazil, in 1990. He was head of the Research and Development Group, Division of Informatics, at the Heart Institute from 1995 to 2008. Since 2008, he has been a full professor at the School of Engineering, USP. His areas of interest include tomography, ultrasound, medical image processing, and modeling of biological systems.

Denise Moral Nakamura is a PhD student at the School of Dentistry of University of Sao Paulo. She received her DDS and MSc degrees from the School of Dentistry of University of Sao Paulo in 2011 and 2014, respectively. Currently, she is a member of the Technical Chamber of Maxillofacial Prosthesis of the Regional Council of Dentistry of Sao Paulo (CROSP). She is interested in facial prosthetics, color formulation, and low-cost technologies.

CC BY: © The Authors. Published by SPIE under a Creative Commons Attribution 4.0 Unported License. Distribution or reproduction of this work in whole or in part requires full attribution of the original publication, including its DOI.
Yargo V. Tessaro, Sergio S. Furuie, and Denise M. Nakamura "Objective color calibration for manufacturing facial prostheses," Journal of Biomedical Optics 26(2), 025002 (13 February 2021). https://doi.org/10.1117/1.JBO.26.2.025002
Received: 9 November 2020; Accepted: 27 January 2021; Published: 13 February 2021
KEYWORDS: Calibration, Skin, Manufacturing, Image segmentation, Printing, Cameras, Image processing