
Microgenetic optimization algorithm for optimal wavefront shaping

Open Access

Abstract

One of the main limitations of utilizing optimal wavefront shaping in imaging and authentication applications is the slow speed of the optimization algorithms currently being used. To address this problem we develop a microgenetic optimization algorithm (μGA) for optimal wavefront shaping. We test the abilities of the μGA and make comparisons to previous algorithms (iterative and simple-genetic) by using each algorithm to optimize transmission through an opaque medium. From our experiments we find that the μGA is faster than both the iterative and simple-genetic algorithms and that both genetic algorithms are more resistant to noise and sample decoherence than the iterative algorithm.

© 2015 Optical Society of America

1. Introduction

The ability to use wavefront shaping to control the optical properties of opaque media was first predicted by Freund in 1990 [1] and demonstrated experimentally in 2007 by Vellekoop and Mosk, who used a liquid crystal on silicon spatial light modulator (LCOS-SLM) to shape the wavefront of a laser such that the beam was focused through the material [2]. Since then the technique of using optimal wavefront shaping in conjunction with a scattering material has been used for polarization control [3,4], spectral control of light [5–8], enhanced fluorescence microscopy [9,10], perfect focusing [9,11], compression of ultrashort pulses [12,13], spatio-spectral control of random lasers [14–17], and enhanced astronomical/biological imaging [18,19]. In addition, new techniques have been developed to aid in wavefront shaping, most notably photoacoustic wavefront shaping [20–25].

One of the key features of all of these applications is the use of an optimization algorithm to search for the optimal wavefront. The choice of which algorithm to use is crucial and determines the speed and efficiency of optimization as well as the algorithm’s resistance to noise and decoherence effects [26–29]. In this paper we will focus on the software (algorithm) used in optimization. However, we note here that the hardware used also plays a major role in determining the speed of optimization, as wavefront optimization is fundamentally limited by the refresh rate of both the SLM and feedback detector. Thus far there have been three main classes of algorithms used in the literature: iterative (IA) [2,26], partitioning [26], and simple-genetic (SGA) [4,27]. While each method is found to have different benefits and limitations, one of the common limitations is slow optimization speed. In order to obtain faster optimization speeds we develop a microgenetic algorithm (μGA) for wavefront optimization as μGAs are known, from other applications [30,31], to be faster than both the IA and SGA.

In this study we first describe, in detail, the operation of the iterative, simple-genetic, and microgenetic algorithms. We then discuss the basic differences between the algorithms and how these differences lead to the μGA being the fastest algorithm. Finally we perform an experimental comparison of the three algorithms using an optimal transmission experiment [29] to test the algorithms’ speed, efficiency, and resistance to noise and decoherence.

2. Theory

A. Background

Focusing light through opaque media is inherently an optimization problem in which we seek to find a specific solution (phase pattern) that optimizes the evaluation of a function (light focused in a given area). Optimization problems have been extensively studied and numerous techniques have been developed to approach optimization [32]. One of the simplest approaches to optimization problems is the brute force method in which every possible solution is systematically evaluated in order to find the optimal solution [32]. While this technique is guaranteed to work in the absence of noise, it is extremely time consuming—becoming unfeasible as the number of dimensions in the solution space increases—and ineffective as the signal-to-noise ratio decreases [27,28].

A different approach to optimization problems, which draws inspiration from nature, is the use of so-called evolutionary algorithms (EAs) [33–35]. EAs are stochastic algorithms in which solutions are randomly generated, evaluated, and then modified until the best solution is found. One of the most well-known classes of EAs is the genetic algorithm (GA), which functions on the same principles as natural selection [33,34,36–38]. In GAs a population of solutions is randomly generated and then evaluated to determine how well each member solves the optimization problem. The functional response of evaluating the ith population member is called the member’s fitness, f_i. Once the members of a generation are ranked by their fitness scores, a new generation is produced through a process called crossover. During crossover the algorithm chooses two different “parent” solutions using a fitness-based selection method and then combines them such that the new “child” solution contains half the values of one parent and half the values of the other, similar to biological reproduction. This process repeats until a new generation of solutions is produced, and then the algorithm restarts with the new generation.

The standard implementation of GAs has already been applied to light transmission optimization [4,27] with impressive results, especially with regard to the algorithm’s resistance to noise [27]. While the simple-genetic algorithm is an improvement over the brute force iterative method, it still requires a large number of function evaluations, which is time consuming. An alternative, and faster, GA implementation is the μGA [30,31]. The μGA primarily differs from the simple-genetic algorithm in its population size, required use of elitism (carrying the most fit member of one generation over into the next), crossover technique, and use of population resets instead of mutation. In the following sections we describe the SGA and μGA (as well as the IA) in detail and expand on the differences between the two genetic algorithms.

B. Iterative Algorithm

The simplest optimization algorithm for controlling transmission is the IA [2,26]. To begin, the IA groups the SLM pixels into N bins and then, starting with the first bin, changes the bin’s phase value in M steps of size 2π/M until the phase value giving the largest target intensity is found. This optimal value is then set for the bin and the procedure continues through all the bins until a phase front giving peak intensity is found. This optimization scheme is simple to implement (requiring minimal coding) and accurately investigates the solution space. On the other hand, the algorithm requires M × N function evaluations, which quickly becomes impractical as the number of bins increases, and the IA’s efficiency is severely limited by the persistence time of the sample and the signal-to-noise ratio of the system [26,28,29].
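As a concrete illustration, the following Python sketch shows one way the IA loop can be organized. The slm_display and measure_intensity callables are hypothetical stand-ins for the SLM and detector interfaces; this is a minimal sketch of the procedure described above, not our laboratory code.

```python
import numpy as np

def iterative_optimization(slm_display, measure_intensity, n_bins, m_steps=32):
    """Step each bin's phase through M values and keep whichever value
    gives the largest measured target intensity (M x N evaluations total)."""
    phases = np.zeros(n_bins)                      # start from a flat phase front
    trial_phases = 2 * np.pi * np.arange(m_steps) / m_steps
    for b in range(n_bins):                        # optimize one bin at a time
        best_phase, best_signal = phases[b], -np.inf
        for phi in trial_phases:                   # M trial values per bin
            phases[b] = phi
            slm_display(phases)                    # hypothetical SLM update
            signal = measure_intensity()           # hypothetical camera readout
            if signal > best_signal:
                best_phase, best_signal = phi, signal
        phases[b] = best_phase                     # fix the optimum and move on
    return phases
```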

C. Simple Genetic Algorithm

The first GA we use is an SGA, similar to those previously used by Conkey et al. [27] and Guan et al. [4]. The algorithm proceeds as follows (a minimal code sketch is given after the list):

  • 1. Generate 30 binned random phase masks using a random number generator.
  • 2. Evaluate the fitness of each phase mask.
  • 3. Pass the top 15 phase masks with highest fitness to the next generation and discard the bottom 15.
  • 4. Randomly choose two members of the fittest 15 (A and B, where A ≠ B) with the probability of being chosen given by
    P_i = f_i / Σ_{j∈W} f_j,
    where Σ_{j∈W} f_j is the net fitness of the population.
  • 5. Combine A and B into a new phase pattern C, where each bin in C has a 50% chance of being the value from A and a 50% chance of being from B.
  • 6. During crossover, at each bin in A and B, the two parent bin values are compared and, if they are the same, a running counter is incremented. After every pair of bins has been compared, this counter contains the number of bins that have the same value in both parent masks; dividing it by the total number of bins gives the similarity score. If the similarity scores of all parents in a generation are greater than 0.97, the bottom 15 phase masks are dropped and randomly regenerated, while the top 15 are kept.
  • 7. After crossover and the similarity test, the bins in C are mutated at a rate of 0.005, which means that a given bin has a 0.5% chance of being assigned a new random value.
  • 8. Steps 4–7 are repeated until the last 15 members of the new population are created.
  • 9. The fitness of the new population is measured and steps 2–8 are repeated until a stop criterion is reached (set fitness or set number of generations).
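A minimal Python sketch of one SGA generation is given below, assuming the population is stored as a 30 × N integer array of phase levels and the fitness values come from an external measurement. The helper names (random_mask, similarity, select_parents) are our own, and the sketch is illustrative rather than our exact laboratory code.

```python
import numpy as np

rng = np.random.default_rng()

def random_mask(n_bins, n_levels=32):
    """One phase mask: a random phase level for each SLM bin (step 1)."""
    return rng.integers(0, n_levels, n_bins)

def similarity(a, b):
    """Fraction of bins whose values agree between two parent masks (step 6)."""
    return np.mean(a == b)

def select_parents(elite, elite_fitness):
    """Fitness-proportionate choice of two distinct parents A and B (step 4)."""
    p = elite_fitness / elite_fitness.sum()
    i, j = rng.choice(len(elite), size=2, replace=False, p=p)
    return elite[i], elite[j]

def sga_generation(population, fitness, mutation_rate=0.005,
                   n_levels=32, similarity_threshold=0.97):
    """Produce the next generation of 30 masks (steps 3-8)."""
    order = np.argsort(fitness)[::-1]              # rank by fitness, best first
    population, fitness = population[order], fitness[order]
    half = len(population) // 2
    elite = population[:half]                      # keep the top 15 (step 3)
    children, scores = [], []
    for _ in range(half):
        a, b = select_parents(elite, fitness[:half])
        scores.append(similarity(a, b))
        take_a = rng.random(a.size) < 0.5          # 50/50 uniform crossover (step 5)
        child = np.where(take_a, a, b)
        mutate = rng.random(child.size) < mutation_rate
        child[mutate] = rng.integers(0, n_levels, mutate.sum())  # mutation (step 7)
        children.append(child)
    if min(scores) > similarity_threshold:         # converged (step 6): reset bottom half
        children = [random_mask(elite.shape[1], n_levels) for _ in range(half)]
    return np.vstack([elite, np.array(children)])
```

In use, the returned array is re-evaluated on the hardware to obtain the next fitness vector, and sga_generation is called again until the stop criterion of step 9 is met.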

D. Microgenetic Algorithm

The second class of GAs we consider are μGAs. μGAs have a similar structure to SGAs, but are designed to work with smaller population sizes and therefore require fewer function evaluations than SGAs. For instance, we typically use a population of 30 phase masks for the SGA, and five phase masks for the μGA.

Our μGA is structured as follows, with a corresponding code sketch given after the list:

  • 1. Generate five binned random phase masks using a random number generator.
  • 2. Evaluate the fitness of each phase mask and rank them 1–5, with 1 having the highest fitness and 5 having the lowest fitness.
  • 3. Discard the two phase masks with the lowest fitness scores and pass the highest-scoring phase mask to the next generation.
  • 4. Perform crossover on the top three ranked phase masks using the following combinations: rank 1 with rank 2, rank 1 with rank 3, and rank 2 with rank 3 (performed twice).
  • 5. During crossover, at each bin in the parent phase masks, the bin values of both parents are compared and, if they are the same, a running counter is incremented. After every pair of bins has been compared, this counter contains the number of bins that have the exact same value in both parent masks; dividing it by the total number of bins gives the similarity score. If the similarity scores of all parents in a generation are greater than 0.97, the most fit phase mask is kept and the bottom four phase masks are randomly regenerated.
  • 6. Evaluate the fitness of the new generation.
  • 7. Repeat steps 2–6 until a stop criterion is reached (set fitness or set number of generations).
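The corresponding sketch of one μGA generation is shown below; it reuses rng, random_mask, and similarity from the SGA sketch above and, like that sketch, is an illustration of the steps rather than our exact code.

```python
def uga_generation(population, fitness, n_levels=32, similarity_threshold=0.97):
    """Produce the next generation of five masks (steps 2-5)."""
    order = np.argsort(fitness)[::-1]             # rank 1-5, fittest first (step 2)
    population = population[order]
    best = population[:1]                         # elitism: rank 1 always survives (step 3)
    pairs = [(0, 1), (0, 2), (1, 2), (1, 2)]      # crossover pairings of step 4
    scores = [similarity(population[i], population[j]) for i, j in pairs]
    if min(scores) > similarity_threshold:        # converged on a local maximum (step 5)
        resets = [random_mask(population.shape[1], n_levels) for _ in range(4)]
        return np.vstack([best, np.array(resets)])
    children = []
    for i, j in pairs:                            # uniform crossover, no mutation
        take_i = rng.random(population.shape[1]) < 0.5
        children.append(np.where(take_i, population[i], population[j]))
    return np.vstack([best, np.array(children)])
```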

E. Differences between the SGA and μGA

While the structures of the SGA and the μGA are similar—both using randomly generated populations and crossover to explore the solution space—there are four main differences between the two algorithms:

  • 1. The μGA works with a smaller population size (five for our algorithm) than a SGA (our SGA uses 30).
  • 2. When choosing parents to cross over, a μGA uses a tournament-style selection [39,40], while an SGA can use a variety of different methods, such as fitness-proportionate selection [35], stochastic universal sampling [41], tournament selection [39,40], and reward-based selection [42,43]. Our SGA uses fitness-proportionate selection (both selection styles are sketched after this list).
  • 3. A μGA requires the use of elitism, while an SGA does not require it.
  • 4. SGAs use mutation in order to avoid local maxima in the fitness landscape, while μGAs do not. Instead, μGAs rely entirely on restarting each time the population reaches a local maximum (where the similarity score is greater than 0.97); this is step 5 in the μGA.
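For reference, the two selection styles named in item 2 can be written as the following generic sketches (reusing rng from the SGA sketch above); these are textbook forms given for illustration, not the exact routines of either algorithm.

```python
import numpy as np

def tournament_select(population, fitness, k=3):
    """Tournament selection: draw k members at random, return the fittest of the draw."""
    idx = rng.choice(len(population), size=k, replace=False)
    return population[idx[np.argmax(fitness[idx])]]

def roulette_select(population, fitness):
    """Fitness-proportionate (roulette-wheel) selection, P_i = f_i / sum_j f_j."""
    return population[rng.choice(len(population), p=fitness / fitness.sum())]
```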

The smaller population size (and lack of mutation) allows the μGA to optimize more quickly [30,31] as each generation has fewer function evaluations than the SGA and the μGA skips the mutation step. However, since the population size is smaller, the μGA requires the use of elitism to operate and continue to approach an optimal solution [30,31]. Additionally, the small population size and lack of mutation leads to the μGA often drifting toward local maxima, which requires the algorithm to restart in order to escape a localized solution.

3. Experimental Method

For testing the different algorithms we use two different types of opaque media: commercially available ground glass and ZrO2 nanoparticle (NP) doped polyurethane nanocomposites. To prepare the nanocomposites we first synthesize ZrO2 NPs using forced hydrolysis [44] with a calcination temperature and time of 600°C and 1 h, respectively. Figure 1 shows an SEM image of the NPs at two magnifications and Fig. 2 shows the size distribution of the NPs, with the average diameter being 195 ± 32 nm. Once the NPs are synthesized we prepare an equimolar solution of tetraethylene glycol and poly(hexamethylene diisocyanate) with a 0.1 wt. % di-n-butyltin dilaurate catalyst and centrifuge it to remove air bubbles. The NPs are then added and the viscous mixture is stirred, poured into a circular die, and left to cure overnight at room temperature.

Fig. 1. SEM images of ZrO2 nanoparticles at magnifications of 20,000× and 200,000×. The particles are found to be spherical with diameters ranging between 80 and 300 nm.

Fig. 2. Histogram of NP diameters measured from SEM measurements. The mean diameter is 195 ± 32 nm.

The two sample types, ground glass and ZrO2/PU nanocomposite, represent two separate stability and scattering regimes. The ground glass is found to have a scattering length of 970.7 μm and a persistence time of the order of days, while the nanocomposite is found to have a scattering length of 11.2 μm and a persistence time of 90 min. This means that the ground glass allows for more stable tests, due to its long persistence time, and larger enhancements, due to its long scattering length [29]. On the other hand, the nanocomposite allows for tests of the algorithms’ noise and decoherence resistance, as its persistence time is much shorter and noise is found to have a larger effect on samples with smaller scattering lengths [29].

Once the samples are prepared, we use an LCOS-SLM-based optimal transmission setup to compare the different algorithms. The primary components of the setup are a Coherent Verdi V10 Nd:YVO4 laser, a Boulder Nonlinear Systems high-speed LCOS-SLM, and a Thorlabs CMOS camera; see [29] for more detail. For minimal-noise applications we use the ground glass sample and operate the Verdi at 10 W, where the laser is most stable (optical noise < 0.03% rms), and for measuring the algorithms’ resistance to noise/decoherence we use the nanocomposite sample and operate the Verdi at 200 mW, where the emission is less stable (optical noise of 1%).

4. Results and Discussion

A. Optimization Speed

We begin by measuring the average intensity enhancement for the IA using bins of side length b = 16 px, M = 32 phase steps, and a ground glass sample. Using the enhancement determined from the IA as the GA stop condition, we measure the number of iterations required for each algorithm to reach the set enhancement. We use the same bin size for all three algorithms and perform five-run averaging of the enhancement curves. Figure 3 shows the enhancement as a function of iteration for each algorithm, with the target enhancement, η_T, marked by the dashed line. From the figure we find that both the SGA and μGA are faster than the IA, with the μGA taking 2870 iterations, the SGA taking 10,194 iterations, and the IA taking 11,922 iterations. While the SGA is found to be only slightly faster than the IA (1.17×), the μGA is found to be 4.15× faster than the IA and 3.55× faster than the SGA for this experimental configuration. Further tests with other configurations of samples and bin sizes show that the actual speed enhancement depends on the experimental parameters, but in all trials where the three algorithms can reach the same enhancement, the μGA is the fastest algorithm. This result is expected given the μGA’s use of a smaller population size [30,31].

Fig. 3. Enhancement as a function of the number of iterations for the IA, SGA, and μGA, where the stop condition for the GAs is an enhancement of 55. The μGA is found to be the fastest algorithm, with the SGA only being slightly faster than the IA.

B. Resistance to Noise and Decoherence

In addition to being faster than the IA, the GAs are found to be highly resistant to the effects of both noise and sample decoherence [27], which are detrimental to the effectiveness of the IA [26,28,29]. To demonstrate the robustness of the GAs, we use a ZrO2 NP-doped polyurethane sample with a short persistence time [29] and operate the probe laser at low power, where the signal-to-noise ratio is smallest. First we optimize transmission using the IA with a bin length of b = 8 px and M = 32 phase steps; the results are poor (shown in Fig. 4), with a peak enhancement of only 3.9. Next we test both the SGA and μGA with bins of length b = 8 px and find drastically improved results relative to the IA, with the μGA reaching an enhancement of 66 and the SGA reaching an enhancement of 138.5, as shown in Fig. 4 [45].

Fig. 4. Enhancement as a function of normalized iteration using a noisy laser, short persistence time sample, and 4096 bins. The IA only results in a small enhancement of 3.9, while the μGA obtains an enhancement of 66 and the SGA an enhancement of 138.5.

Figure 4 also reveals the general functional form of the enhancement for each algorithm with the IA consisting of many sharp steps, the μGA behaving exponentially, and the SGA behaving as a power function. From Fig. 4 we find that the μGA rapidly increases and then saturates, with further iterations not adding to the enhancement, while the SGA continues to improve until the maximum number of iterations is reached. This implies that for the same number of iterations and bin size the SGA can attain larger enhancements than the μGA.

C. μGA and SGA Comparison

Given the differences in the performance of the μGA and SGA in the above sections, we measure the effect of bin size on the two GAs’ speed and effectiveness. First we measure how bin size affects the optimization speed by using the two GAs to optimize light transmission through a polymer sample to an enhancement of η = 43. A target enhancement of η = 43 is chosen because both GAs can achieve that level of enhancement easily; the μGA eventually saturates, while the SGA continues to attain higher enhancements (discussed below), so comparing the speed of the two GAs requires a target enhancement that can be reached by both algorithms. Figure 5 shows the average number of iterations required for the two algorithms to reach the set enhancement. Only bins of side length b ≤ 16 px are shown, as larger bin sizes are found to be unable to reach an enhancement of 43. For all bin sizes tested the μGA is found to be faster than the SGA, with a bin size of b = 8 px resulting in the fastest optimization for both algorithms.

Fig. 5. Number of iterations required to reach an enhancement of η = 43 as a function of the total number of bins for the SGA and μGA. Bins with a side length greater than 16 px are not shown as they are found to be unable to optimize to the target enhancement within a reasonable time frame.

In addition to measuring the optimization speed of the two GAs, we also consider the maximum enhancement achievable given a fixed number of iterations. Figure 6 shows the peak enhancement as a function of the number of bins for the GAs running 12,500 iterations. For bins of side length b ≥ 32 px, the two GAs are found to produce the same enhancement within uncertainty. However, as the bin size decreases the two algorithms diverge, with the SGA producing greater enhancements than the μGA. This divergence occurs because the SGA’s larger population is better able to explore the solution domain than the μGA for small bin sizes. We also find from Fig. 6 that for the maximum number of bins (b = 1 px) the enhancement is much lower than for b = 2 px. This result is slightly counterintuitive, as having more bins should allow for a more accurate representation of the optimal phase mask. However, increasing the number of bins also increases the size of the solution space, which requires more iterations to explore effectively. Since we limit the number of iterations, we essentially limit the algorithm to searching a fraction of the total solution space, which leads to smaller enhancements.

Fig. 6. Peak enhancement as a function of the number of SLM bins for the SGA and μGA with a stop condition of 12,500 iterations.

5. Conclusions

We have developed a μGA for performing optimization of light through opaque media. The μGA operates based on similar principles to previously used GAs [4,27], but differs in the population size, use of elitism, crossover technique, and similarity-based regeneration in place of mutation. These differences lead to the μGA being significantly faster than both the SGA and IA. This speed enhancement is advantageous for applications in which optimization must occur quickly, such as biological imaging [19], authentication [29], and astronomical imaging [18]. For applications where speed is less important and large enhancements are desired the SGA is the best option.

To further enhance the speed of the μGA we are currently working on implementing multithreading in the μGA code in order to take advantage of modern computers’ multicore processors. The idea is to perform overhead calculations (crossover, random number generation, etc.) on one thread while a different thread controls the SLM and camera. While this speed improvement may be small for a single iteration, it compounds over thousands of iterations to provide significant time savings.
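As a rough illustration of this idea, the sketch below overlaps the hardware loop with overhead work using Python threads. The display_and_measure (one SLM update plus camera read) and precompute_overhead (random-number generation and crossover bookkeeping for the next generation) callables are hypothetical, and the exact division of work in our implementation may differ.

```python
import threading
import queue
import numpy as np

def measure_generation_pipelined(masks, display_and_measure, precompute_overhead):
    """Run the SLM/camera loop on a worker thread while the main thread
    precomputes the overhead needed for the next generation."""
    results = queue.Queue()

    def hardware_worker():
        for k, mask in enumerate(masks):           # rate-limiting hardware loop
            results.put((k, display_and_measure(mask)))

    worker = threading.Thread(target=hardware_worker)
    worker.start()
    overhead = precompute_overhead()               # runs while measurements proceed
    fitness = np.empty(len(masks))
    for _ in range(len(masks)):
        k, value = results.get()                   # collect fitness values as they arrive
        fitness[k] = value
    worker.join()
    return fitness, overhead
```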

This work was supported by the Defense Threat Reduction Agency, Award # HDTRA1-13-1-0050 to Washington State University.

References and Note

1. I. Freund, “Looking through walls and around corners,” Physica A 168, 49–65 (1990). [CrossRef]  

2. I. Vellekoop and A. P. Mosk, “Focusing coherent light through opaque strongly scattering media,” Opt. Lett. 32, 2309–2311 (2007). [CrossRef]  

3. J. H. Park, C. Park, H. Yu, Y. H. Cho, and Y. Park, “Active spectral filtering through turbid media,” Opt. Lett. 37, 3261–3263 (2012). [CrossRef]  

4. Y. Guan, O. Katz, E. Small, J. Zhou, and Y. Silberberg, “Polarization control of multiply scattered light through random media by wavefront shaping,” Opt. Lett. 37, 4663–4665 (2012). [CrossRef]  

5. E. Small, O. Katz, Y. Guan, and Y. Silberberg, “Spectral control of broadband light through random media by wavefront shaping,” Opt. Lett. 37, 3429–3431 (2012). [CrossRef]  

6. J. H. Park, C. Park, H. Yu, Y. H. Cho, and Y. Park, “Dynamic active wave plate using random nanoparticles,” Opt. Express 20, 17010–17016 (2012). [CrossRef]  

7. H. P. Paudel, C. Stockbridge, J. Mertz, and T. Bifano, “Focusing polychromatic light through strongly scattering media,” Opt. Express 21, 17299–17308 (2013). [CrossRef]  

8. F. van Beijnum, E. G. van Putten, A. Lagendijk, and A. P. Mosk, “Frequency bandwidth of light focused through turbid media,” Opt. Lett. 36, 373–375 (2011). [CrossRef]  

9. I. M. Vellekoop, A. Lagendijk, and A. P. Mosk, “Exploiting disorder for perfect focusing,” Nat. Photonics 4, 320–322 (2010). [CrossRef]  

10. Y. M. Wang, B. Judkewitz, C. H. DiMarzio, and C. A. Yang, “Deep-tissue focal fluorescence imaging with digitally time-reversed ultrasound-encoded light,” Nat. Commun. 3, 928 (2012). [CrossRef]  

11. E. G. van Putten, D. Akbulut, J. Bertolotti, W. L. Vos, A. Lagendijk, and A. P. Mosk, “Scattering lens resolves sub-100 nm structures with visible light,” Phys. Rev. Lett. 106, 193905 (2011). [CrossRef]  

12. D. J. McCabe, A. Tajalli, D. R. Austin, P. Bondareff, I. A. Walmsley, S. Gigan, and B. Chatel, “Spatio-temporal focusing of an ultrafast pulse through a multiply scattering medium,” Nat. Commun. 2, 447 (2011). [CrossRef]  

13. O. Katz, E. Small, Y. Bromberg, and Y. Silberberg, “Focusing and compression of ultrashort pulses through scattering media,” Nat. Photonics 5, 372–377 (2011). [CrossRef]  

14. M. Leonetti and C. Lopez, “Active subnanometer spectral control of a random laser,” Appl. Phys. Lett. 102, 071105 (2013). [CrossRef]  

15. M. Leonetti, C. Conti, and C. Lopez, “Random laser tailored by directional stimulated emission,” Phys. Rev. A 85, 043841 (2012). [CrossRef]  

16. N. Bachelard, S. Gigan, X. Noblin, and P. Sebbah, “Adaptive pumping for spectral control of random lasers,” Nat. Phys. 10, 426–431 (2014).

17. N. Bachelard, J. Andreasen, S. Gigan, and P. Sebbah, “Taming random lasers through active spatial control of the pump,” Phys. Rev. Lett. 109, 033903 (2012). [CrossRef]  

18. A. P. Mosk, A. Lagendijk, G. Lerosey, and M. Fink, “Controlling waves in space and time for imaging and focusing in complex media,” Nat. Photonics 6, 283–292 (2012). [CrossRef]  

19. C. Stockbridge, Y. Lu, J. Moore, S. Hoffman, R. Paxman, K. Toussaint, and T. Bifano, “Focusing through dynamic scattering media,” Opt. Express 20, 15086–15092 (2012). [CrossRef]  

20. P. Lai, L. Wang, J. W. Tay, and L. V. Wang, “Nonlinear photoacoustic wavefront shaping (PAWS) for single speckle-grain optical focusing in scattering media,” arXiv:1402.0816v1 (2014).

21. T. Chaigne, J. Gateau, O. Katz, C. Boccara, S. Gigan, and E. Bossy, “Improving photoacoustic-guided optical focusing in scattering media by spectrally filtered detection,” Opt. Lett. 39, 6054–6057 (2014). [CrossRef]  

22. J. W. Tay, J. Liang, and L. V. Wang, “Amplitude-masked photoacoustic wavefront shaping and application in flowmetry,” Opt. Lett. 39, 5499–5502 (2014). [CrossRef]  

23. S. Gigan and E. Bossy, “A photoacoustic transmission matrix for deep optical imaging,” in SPIE Newsroom (SPIE, 2014).

24. J. W. Tay, P. Lai, Y. Suzuki, and L. V. Wang, “Ultrasonically encoded wavefront shaping for focusing into random media,” Sci. Rep. 4, 3918 (2014). [CrossRef]  

25. P. Lai, J. W. Tay, L. Wang, and L. V. Wang, “Optical focusing in scattering media with photoacoustic wavefront shaping (PAWS),” Proc. SPIE 8943, 894318 (2014).

26. I. Vellekoop and A. Mosk, “Phase control algorithms for focusing light through turbid media,” Opt. Commun. 281, 3071–3080 (2008). [CrossRef]  

27. D. B. Conkey, A. N. Brown, A. M. Caravaca-Aguirre, and R. Piestun, “Genetic algorithm optimization for focusing through turbid media in noisy environments,” Opt. Express 20, 4840–4849 (2012). [CrossRef]  

28. H. Yilmaz, W. L. Vos, and A. P. Mosk, “Optimal control of light propagation through multiple-scattering media in the presence of noise,” Biomed. Opt. Express 4, 1759–1768 (2013). [CrossRef]  

29. B. R. Anderson, R. Gunawidjaja, and H. Eilers, “Effect of experimental parameters on optimal transmission of light through opaque media,” Phys. Rev. A 90, 053826 (2014). [CrossRef]  

30. K. Krishnakumar, “Micro-genetic algorithms for stationary and non-stationary function optimization,” Proc. SPIE 1196, 1196 (1989).

31. P. K. Senecal, “Numerical optimization using the gen4 micro-genetic algorithm code” (University of Wisconsin-Madison, 2000).

32. J. Nocedal and S. Wright, Numerical Optimization, 2nd ed. (Springer, 2006).

33. E. Elbeltagi, T. Hegazy, and D. Grierson, “Comparison among five evolutionary-based optimization algorithms,” Adv. Eng. Inf. 19, 43–53 (2005).

34. E. Zitzler, K. Deb, L. Thiele, C. C. Coello, and D. Corne, eds., Evolutionary Multi-Criterion Optimization, Vol. 1993 of Lecture Notes in Computer Science (Springer, 2001).

35. T. Back, Evolutionary Algorithms in Theory and Practice (Oxford University, 1996).

36. D. Goldberg, Genetic Algorithms in Search, Optimization and Machine Learning (Addison-Wesley, 1989).

37. J. Holland, Adaptation in Natural and Artificial Systems (University of Michigan, 1975).

38. R. Haupt and S. Haupt, Practical Genetic Algorithms, 2nd ed. (Wiley, 2004).

39. B. L. Miller and D. E. Goldberg, “Genetic algorithms, tournament selection, and the effects of noise,” Complex Syst. 9, 193–212 (1995).

40. M. V. Butz, K. Sastry, and D. E. Goldberg, “Tournament selection in XCS,” in Proceedings of the Fifth Genetic and Evolutionary Computation Conference (GECCO-2003) (Association for Computing Machinery, 2002).

41. J. E. Baker, “Reducing bias and inefficiency in the selection algorithm,” in Proceedings of the Second International Conference on Genetic Algorithms and Their Application (L. Erlbaum Associates, 1987), pp. 14–21.

42. I. Loshchilov, M. Schoenauer, and M. Sebag, “Not all parents are equal for MO-CMA-ES,” in Evolutionary Multi-Criterion Optimization (Springer, 2011), pp. 31–45.

43. K. Deb, A. Pratap, S. Agarwal, and T. Meyarivan, “A fast and elitist multiobjective genetic algorithm: NSGA-II,” IEEE Trans. Evol. Comput. 6, 182–197 (2002).

44. R. Gunawidjaja, T. Myint, and H. Eilers, “Temperature-dependent phase changes in multicolored ErxYbyZr1−x−yO2/Eu0.02Y1.98O3 core/shell nanoparticles,” J. Phys. Chem. C 117, 14427–14434 (2013). [CrossRef]

45. In order to compare the three algorithms in Fig. 4 we plot the enhancements as a function of normalized iteration, such that the max number of iterations is set to one. For the IA the total number of iterations is 131,072 and for both GAs the total number of iterations is 12,000.
