
Active photonic sensing for super-resolved reading performance in simulated prosthetic vision

Open Access

Abstract

In this work, we study the enhancement of simulated prosthetic reading performance through “active photonic sensing” in normally sighted subjects. Three sensing paradigms were implemented: active sensing, in which the subject actively scanned the presented words using a computer mouse, with an option to control the text size; passive scanning, produced by software-initiated horizontal movements of the words; and no scanning. Our findings reveal a 30% increase in word recognition rate with active scanning as compared to no or passive scanning, and up to a 14-fold increase with zooming. These results highlight the importance of a patient-interactive interface and shed light on techniques that can greatly enhance prosthetic vision quality.

© 2019 Optical Society of America under the terms of the OSA Open Access Publishing Agreement

1. Background

The term “active sensing” describes the active acquisition of sensory information (tactile or visual) by moving the sensory organs across a scene in an attempt to increase the amount of information and the resolution [1–4]. Numerous groups have demonstrated the importance of eye movements (saccades) for spatial visual performance [5–10]. Of main importance are the work of the Roorda group [11] and the work by Rucci et al. [12], demonstrating the enhancement of spatial resolution induced by the eye’s scanning of the visual target as compared to the condition where the target is fixed at a specific position on the retina. The same concept is sometimes adopted to describe non-physiological engineering detection techniques using a moving sensor [13].

In the current research, we studied the application of the active sensing concept to retinal prosthesis systems, which are currently used for vision restoration in some degenerative diseases of the outer retina, such as age-related macular degeneration (AMD) and retinitis pigmentosa (RP). In these diseases, the “image capturing” photoreceptor layer of the retina degenerates; however, the remaining neural cells (bipolar and ganglion cells) that process the visual signals and relay them to the brain are relatively well preserved [14–16]. Vision restoration in these cases can be achieved by bypassing the degenerated cells and electrically stimulating the inner retina through retinal electrode implants [17]. Recent clinical trials investigating the various types of retinal prostheses have proven the feasibility of eliciting useful vision in blind patients (e.g., [18–22]). Notwithstanding the improvement in the quality of life of patients implanted with these devices, the obtained visual acuity is still limited (20/550–20/1200 in most patients [17]).

Of main importance to the current study are the substantial differences between the various prosthetic systems in the manner in which the visual scene is scanned by the patients. In some prosthetic vision systems (e.g., the subretinal microphotodiode camera of Retina Implant AG, or the proposed PRIMA implant with its photovoltaic approach [17,23]), the visual scene is scanned by natural eye movements, thus preserving sensorimotor contingency, as is the case in natural vision. In contrast, in other systems (e.g., the Second Sight Argus II), the visual scene, captured by a glasses-mounted camera, is transmitted to the electrode array by a coil and is not affected by eye movements. Thus, in this type of device, the visual scene is scanned not by the sensing organ (the eye) but rather by head movements [24], and sensorimotor contingency is therefore not preserved. One of the aims of the current research is to study whether active scanning not performed by the eye, despite the sensorimotor non-contingency, can still enhance visual performance in simulated prosthetic vision.

Also of great interest to the current research is the use of optical or digital zoom for enhancing prosthetic vision performance, as recently reported by the Second Sight group [25]. In this case, the increase in the obtained visual acuity is accompanied by a decrease in the visual field size (×16 magnification and a 16-fold decrease of the visual field size for an obtained acuity of 20/200). It is therefore of great interest to characterize the trade-off between acuity and visual field size and to measure whether the overall performance of visual tasks, such as reading, is improved by using the zoom option.

To investigate the above-mentioned questions, we utilized computer-based prosthetic vision simulations, which are widely used as an important tool for evaluating prosthetic visual function (e.g., [26,27]). These simulations largely rely on the perceived prosthetic vision reported by implanted patients, who describe the percept elicited by current injection as a phosphene: a sensation of light with a round shape, usually a blurred gray circle [28–30]. In the current research we studied reading, which is an important daily-life activity and is therefore widely studied with simulated prosthetic vision systems [28,30–33]. To reduce the effect of text comprehension, which can introduce subject-to-subject variability, subjects were presented with single words rather than full sentences, and reading performance was evaluated by measuring word recognition rate and reading speed rather than text comprehension. We developed a simulated prosthetic vision paradigm aimed at measuring prosthetic reading capabilities while implementing active sensing in the form of image scanning and zooming. In our system, subjects scanned the visual scene by actively controlling the location of the simulated phosphene window with a hand-held computer mouse. In addition, subjects could optimize their reading by choosing the zoom factor, which controlled the size of the displayed text.

2. Methods

2.1 Subjects

The study was approved by the IRB Committee at the Edith Wolfson Medical Center, Holon, Israel, and was conducted according to the guidelines of the Bar-Ilan University Ethics Committee. All participants signed an informed consent form. Subjects (n = 6) were all student volunteers (23–34 years) with normal or corrected-to-normal vision, recruited by advertising.

2.2 Set-up and study design

The experimental setup consisted of MATLAB-based computer simulation software and a computer monitor (1920 × 1080 pixels, 60 Hz refresh rate) located at a distance of 60 cm from the subject, with the presented word covering 15° × 15° of visual angle. To ensure that the contrast presented to the user was not subject to the monitor’s nonlinear voltage–intensity response, we performed gamma correction (γ = 1.7435) and utilized brightness levels ranging from 5 cd/m² (black) to 181 cd/m² (white). The experiments were performed in a dimly lit room with an illumination level of 0.5 cd/m². This level of illumination was chosen to reduce the effect of ambient room light on the perceived contrast of the presented word and to better represent the real-use situation.
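To illustrate the correction, the following Python sketch (the study’s own software was MATLAB-based) shows one way a display with a power-law response can be linearized using the reported γ = 1.7435 and luminance range; the function names and the linear luminance mapping are our illustrative assumptions, not the authors’ code.

```python
import numpy as np

GAMMA = 1.7435              # display gamma reported in the text
L_MIN, L_MAX = 5.0, 181.0   # measured black/white luminance, cd/m^2

def gamma_correct(gray):
    """Map a desired linear gray level in [0, 1] to the 8-bit command
    value that produces it on a display with response I ~ (v/255)**GAMMA."""
    gray = np.clip(np.asarray(gray, dtype=float), 0.0, 1.0)
    # Invert the monitor's power-law response
    return np.round(255.0 * gray ** (1.0 / GAMMA)).astype(np.uint8)

def target_luminance(gray):
    """Physical luminance (cd/m^2) intended for a linear gray level."""
    return L_MIN + (L_MAX - L_MIN) * np.asarray(gray, dtype=float)

print(gamma_correct(0.5))     # -> 171, not 128, on this display
print(target_luminance(0.5))  # -> 93.0 cd/m^2
```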

Subjects were presented with phosphenized English words randomly drawn from an online database (www.randomwordgenerator.com). Subjects were instructed to recognize the presented words as quickly and accurately as possible. Each experiment began with a short training session in which the subjects practiced the scanning and zoom-in options of the mouse. The subjects then performed the tasks under the various conditions (no scanning, passive scanning, and active sensing (scanning and zoom)) in a randomized order.

In the active sensing paradigm, the mouse position was sampled in real time at 125 Hz and used by custom-written MATLAB software to control the location of the phosphene window on the computer screen.

2.3. Prosthetic vision simulation: algorithm

As a first step towards prosthetic vision simulation, we developed a real-time algorithm that converts the bitmap image into a phosphene image in a multi-step process. First, a phosphene matrix the same size as the field of view (FOV) of 15° × 15° was created at a pre-set density (0.75 to 2.5 CPD, cycles per degree) of Gaussian-shaped phosphenes. This range of phosphene densities was chosen to simulate currently available retinal implants (e.g., the 70 µm pixel pitch of the Alpha IMS implant corresponds to 2.14 CPD). Second, a chosen region of interest (ROI), selected in real time by the subject using the left click of the mouse, was converted to a low-resolution 8-bit gray image. Towards this end, we applied mean filtering and then quantization through a multistep algorithm which relies on the division of the ROI into phosphene-sized blocks. The value of each pixel was replaced by the average of the entire block. Next, the image was quantized into four gray levels and multiplied by the pre-allocated phosphene grid (Fig. 1(a) and 1(b)). Another iteration of the algorithm is triggered by a change in the location of the mouse or by the subject selecting a new ROI via a mouse left click. An example of the application of this prosthetic vision simulation algorithm for different phosphene densities is presented in Fig. 2(a) and 2(b).
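The following minimal Python sketch illustrates this pipeline (block averaging, four-level quantization, and modulation by a Gaussian phosphene grid). It is an illustration only: the study’s implementation was in MATLAB, and the Gaussian width relative to the phosphene pitch is our assumption, as it is not specified in the text.

```python
import numpy as np

def phosphene_grid(h, w, pitch, sigma_frac=0.3):
    """Pre-allocated grid of Gaussian-shaped phosphenes.
    pitch: phosphene spacing in pixels; sigma_frac: Gaussian width
    relative to the pitch (our choice, not given in the paper)."""
    yy, xx = np.mgrid[0:h, 0:w]
    # Distance of every pixel from the center of its phosphene cell
    dy = (yy % pitch) - pitch / 2.0
    dx = (xx % pitch) - pitch / 2.0
    sigma = sigma_frac * pitch
    return np.exp(-(dx**2 + dy**2) / (2.0 * sigma**2))

def phosphenize(roi, pitch, n_levels=4):
    """Convert an 8-bit grayscale ROI to a simulated phosphene image."""
    h, w = roi.shape
    img = roi.astype(float) / 255.0
    # Mean filter: replace each phosphene-sized block by its average
    for y in range(0, h, pitch):
        for x in range(0, w, pitch):
            img[y:y + pitch, x:x + pitch] = img[y:y + pitch, x:x + pitch].mean()
    # Quantize the block averages into four gray levels
    img = np.round(img * (n_levels - 1)) / (n_levels - 1)
    # Modulate by the pre-allocated Gaussian phosphene grid
    return img * phosphene_grid(h, w, pitch)
```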

Fig. 1 Block diagram of the prosthetic vision simulation process. (a) Block diagram of the pixelation process. (b) The zoom implementation algorithm.

Fig. 2 Prosthetic vision simulation. (a)&(b) Phosphene concept demonstration. The word “sun” generated at two phosphene densities of 0.75 CPD (a) and 1.5 CPD (b). (c)&(d) Zoom effect demonstration for the word “sun” at a CPD of 0.75 with zoom factors of 1 (c) and 2.5 (d). (e)&(f) Contrast effect demonstration for the word “sun” at a CPD of 0.75 and contrasts of 25% (e) and 50% (f).

2.4. Scanning effect investigation

Of main interest was the validation of our working hypothesis, which states that active sensing, through subject-controlled scanning of the phosphenized image, will yield a greater improvement in word recognition rate than no scanning or passive scanning, which is not controlled by the subject. Toward this end, we implemented a passive scanning paradigm, produced by software-initiated horizontal movements of the phosphene matrix over the presented word, and an active scanning paradigm, in which the subject controlled the position of the phosphene matrix over the presented word with a computer mouse. The mouse movement signal was further used to analyze the scanning path and the so-called hand-saccades (see data analysis section). To evaluate the effect of the various scanning paradigms on reading performance, ten words were presented to the subject at phosphene densities ranging from 0.75 to 2 CPD in a pseudo-random order. Specifically, both the order of the presented words and the CPD were randomized before each trial for each subject. A comparison was then made between reading performance in sessions in which the scanning option was disabled and those in which active scanning was enabled. To confirm the greater enhancement obtained through active scanning, a comparison was also made between sessions in which active scanning was enabled and those in which passive scanning was employed. These investigations were performed with words presented at 94% contrast and a constant zoom factor of 1.

2.5. Prosthetic vision simulation: zoom effect

A second feature of interest in our study of active sensing in prosthetic vision is active zooming and its effect on the subject’s reading performance. The subject was given the option to change the image size (zoom in or out) among preset zoom values of 1, 1.5, 2, and 2.5. An example of the zoom implementation is depicted in Fig. 2(c) and 2(d), in which the same phosphenized word is presented for zoom = 1 (Fig. 2(c)) and zoom = 2.5 (Fig. 2(d)).
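A possible rendering of this zoom step is sketched below, under the assumption that the 15° × 15° FOV window stays fixed while a correspondingly smaller region of the source image is magnified to fill it, after which the phosphenization of Section 2.3 is reapplied. Fig. 1(b) describes the authors’ actual algorithm; the cropping and nearest-neighbor resampling here are our choices.

```python
import numpy as np

def apply_zoom(image, zoom, center):
    """Magnify the region around `center` by `zoom`: crop a window
    1/zoom of the FOV and rescale it back to full size, so the text
    appears zoom times larger while the visible FOV shrinks."""
    h, w = image.shape
    ch, cw = int(h / zoom), int(w / zoom)
    cy, cx = center
    y0 = int(np.clip(cy - ch // 2, 0, h - ch))
    x0 = int(np.clip(cx - cw // 2, 0, w - cw))
    crop = image[y0:y0 + ch, x0:x0 + cw]
    # Nearest-neighbor upscaling back to the full FOV window
    yi = np.arange(h) * ch // h
    xi = np.arange(w) * cw // w
    return crop[np.ix_(yi, xi)]
```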

2.6. Prosthetic vision simulation: contrast effect

Furthermore, we investigated the effect of image contrast on prosthetic reading performance. To calculate the contrast level of the text we used Michelson’s definition of contrast, as described in Eq. (1):

$$\mathrm{Contrast}=\frac{I_{Max}-I_{Min}}{I_{Max}+I_{Min}} \tag{1}$$

where $I_{Max}$ and $I_{Min}$ are the maximal and minimal intensities in the image, respectively. To obtain a desired contrast, we kept the text pixel gray level constant at the highest value (white) and modified the background gray level accordingly. We investigated three contrast levels: 94% (the highest possible contrast in the current setup), 50%, and 25% (Fig. 2(e) and 2(f)). The highest possible contrast value (94%) follows from the measured monitor luminance levels, 5 cd/m² for black and 181 cd/m² for white, which yield a maximal Michelson contrast of about 94%.
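Because the text level is pinned at white, the background level needed for a target contrast follows from rearranging Eq. (1) to $I_{Min}=I_{Max}(1-C)/(1+C)$. The short sketch below (ours, for illustration) verifies the three levels used here.

```python
I_MAX = 181.0  # white text luminance, cd/m^2

def background_for_contrast(c, i_max=I_MAX):
    """Background luminance giving Michelson contrast c with white text."""
    return i_max * (1.0 - c) / (1.0 + c)

for c in (0.94, 0.50, 0.25):
    print(f"contrast {c:.0%}: background = {background_for_contrast(c):.1f} cd/m^2")
# 94% -> 5.6 cd/m^2 (close to the monitor's 5 cd/m^2 black level),
# 50% -> 60.3 cd/m^2, 25% -> 108.6 cd/m^2
```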

The contrast effect was investigated with the zoom factor set to 1 and the image contrast set to 94%, 50%, or 25%. It should be noted that, since low contrast values resulted in a higher recognition threshold, these sessions were started at a phosphene density of 1 CPD (one step higher than in the previous sessions) to prevent subject fatigue and redundancy.

2.7. Hand-saccades detection algorithm

Since in our paradigm the visual scene is scanned manually with the computer mouse, we termed the rapid movements over the scene “hand-saccades”. These were defined by setting a threshold on the time interval between two changes of direction (COD) in the scanning process ($T_{COD}$). If the time interval was shorter than the threshold (set to 1.5 seconds), a saccade was detected. The following equation formulates this condition:

$$\mathrm{SaccadeBlock}=\left\{A_{COD}(i)\in \mathrm{Saccade}: T_{A_{COD}(i)}+T_{A_{COD}(i+1)}\le 1.5\,\mathrm{sec}\right\} \tag{2}$$
where $A_{COD}$ is the amplitude, in degrees, of the change of direction in the x/y direction, and the index $i$ runs over all consecutive amplitudes that meet the condition. The threshold was set to 1.5 seconds, as the common mouse movement frequencies were found to be 0–2 Hz.

Subsequently, the overall time per saccades block ($T_{SaccadesBlock}$) is governed by the following equation:

$$T_{SaccadesBlock}=\sum_i \left\{T_{A_{COD}(i)}+T_{A_{COD}(i+1)}\le 1.5\,\mathrm{sec}\right\} \tag{3}$$
where the index $i$ runs over all consecutive time intervals that meet the condition. Figure 3 depicts a characteristic motion track (x-axis) of the mouse during a word recognition trial in the active scanning paradigm. Red plus signs denote the time points where hand-saccades were detected.
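A minimal sketch of this detection, implementing the prose criterion above (a change of direction followed by another within 1.5 s) at the 125 Hz mouse sampling rate given in Section 2.2; the function names are ours and the grouping into blocks is simplified relative to Eqs. (2)–(3).

```python
import numpy as np

FS = 125.0      # mouse sampling rate, Hz (Section 2.2)
T_THRESH = 1.5  # maximum interval between direction changes, seconds

def change_of_direction_indices(x):
    """Sample indices where a 1-D mouse trace reverses direction."""
    v = np.sign(np.diff(x))
    return np.where(v[:-1] * v[1:] < 0)[0] + 1

def detect_hand_saccades(x):
    """CODs that qualify as hand-saccades: those followed by another
    change of direction within T_THRESH seconds."""
    cod = change_of_direction_indices(x)
    dt = np.diff(cod) / FS          # COD-to-COD intervals, seconds
    return cod[:-1][dt <= T_THRESH]

def saccade_block_time(x):
    """Overall time spent in saccade blocks (cf. Eq. (3)): the sum of
    all COD-to-COD intervals below the threshold."""
    cod = change_of_direction_indices(x)
    dt = np.diff(cod) / FS
    return dt[dt <= T_THRESH].sum()
```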

Fig. 3 Raw scanning data with the “hand-saccades” clearly marked.

2.8. Data analysis

Reading performance was evaluated by measuring word recognition rate and reading speed. The recognition rate was defined as the percentage of correctly recognized words out of the entire word repertoire (a total of 10 words); it was then averaged over all repetitions across subjects and fitted to a logistic function, with the threshold set at 80% recognition. The average reading speed of recognized words was defined as the reciprocal of the time needed for the subject to recognize the word.
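A sketch of such a fit, assuming a two-parameter logistic in phosphene density (the slope k and midpoint x0 parameterization, and the example data, are our illustrative assumptions, not the paper’s exact procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(cpd, k, x0):
    """Two-parameter psychometric function: rate vs. phosphene density."""
    return 1.0 / (1.0 + np.exp(-k * (cpd - x0)))

def recognition_threshold(cpd, rate, criterion=0.80):
    """Fit the logistic and invert it at the criterion recognition rate."""
    (k, x0), _ = curve_fit(logistic, cpd, rate, p0=(10.0, 1.0))
    return x0 + np.log(criterion / (1.0 - criterion)) / k

# Hypothetical data shaped like the 94% contrast curve of Fig. 4:
cpd = np.array([0.75, 0.90, 1.00, 1.10, 1.30, 1.50, 2.00])
rate = np.array([0.05, 0.20, 0.50, 0.75, 0.95, 1.00, 1.00])
print(f"80% threshold at {recognition_threshold(cpd, rate):.2f} CPD")
```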

In sessions in which the zoom option was enabled, the zoom used by the subject was continuously recorded, and we analyzed the maximal zoom factor used, averaged over the sessions at the same phosphene density. Only reading sessions in which the words were correctly recognized were included in the averaging process. The significance of the observed differences in word recognition rate and speed between the different scanning paradigms was evaluated using standard paired t-test analysis.
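For reference, the paired comparison can be run as below; the per-subject rates are hypothetical values for illustration only.

```python
from scipy.stats import ttest_rel

# Per-subject mean recognition rates (hypothetical values, n = 6)
active = [0.85, 0.78, 0.90, 0.81, 0.88, 0.83]
passive = [0.62, 0.55, 0.70, 0.58, 0.66, 0.60]

t_stat, p_val = ttest_rel(active, passive)  # standard paired t-test
print(f"t = {t_stat:.2f}, p = {p_val:.4f}")
```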

Finally, we also analyzed the scanning path used by the subjects, as recorded from the computer mouse input, in terms of pattern, velocity, and direction-change probability. The so-called “hand-saccades” were of specific interest, as we hypothesize that they play a role similar to that of eye saccades and are therefore expected to aid in reading performance. We analyzed the saccadic amplitudes and the number of saccades, separately for the vertical and horizontal directions.

3. Results

3.1. Average recognition rate and contrast effect

The first feature that can be observed from the data depicting the recognition rate as a function of phosphene density is the characteristic sigmoidal shape in all experimental paradigms [31] (Fig. 4), in agreement with other psychophysical tests performed in vision as well as in other sensing modalities (e.g., [34–36]). At the 94% contrast level, the recognition rate increased gradually with phosphene density, reaching a plateau at a phosphene density of 1.3 CPD, corresponding to a density of 4.16 phosphenes per letter. Using the logistic function fit, the calculated 80% threshold was found at 1.16 CPD, corresponding to 3.71 phosphenes per letter, in agreement with [31].


Fig. 4 Contrast effect on reading recognition rate. The recognition rate as a function of CPD for various contrast levels of 94% (red dots), 50% (blue dots), and 25% (black dots). All solid lines represent a sigmoidal fit. Error bars represent standard error.


As expected, reducing the contrast of the presented word resulted in a rightward shift of the word recognition rate curve (Fig. 4), with 80% recognition at 1.3 and 2 CPD for the 50% and 25% contrast levels, respectively. The recognition rate at 94% contrast was about 9-fold higher than at 25% contrast for a phosphene density of 1.25 CPD, and 3.6-fold higher at 1.5 CPD (p < 0.01 for all phosphene densities smaller than 1.75 CPD).

As can be seen in Table 1, in order to achieve a recognition rate of 90% or higher, at least 4 phosphenes per letter are needed, in agreement with previous reports on prosthetic reading [33].


Table 1. Phosphene density and phosphenes per letter needed for various recognition rates

3.2. Active sensing effect

Active sensing by scanning the phosphenized presented word with a computer mouse (Fig. 5(a)) significantly increased the recognition rate (p = 0.007), by up to 34%, in the linear range of the sigmoidal curve (phosphene densities of 0.8 to 1 CPD). However, there was little to no effect in the low (<0.8 CPD, p = 0.073) or high (>1 CPD, p = 0.071) phosphene density ranges, where the recognition rate was either very low or saturated, respectively.

Fig. 5 Active sensing effect on reading recognition rate. a) Word recognition rate as a function of CPD for the various sensing paradigms, namely no scanning (black dots), passive scanning (blue dots), and active sensing (red dots). All solid lines represent a sigmoidal fit. b) Comparison of the recognition rates, in the linear range of the psychometric curve, for the no scanning (black dots) and passive scanning (red dots) experimental paradigms. Solid lines represent a linear fit. c) Comparison of the recognition rates, in the linear range of the psychometric curve, for the passive scanning (blue dots) and active scanning (red dots) paradigms. Solid lines represent a linear fit. d) Average word recognition rates over all phosphene densities for the various sensing paradigms. Error bars represent standard error.

We therefore further investigated the effect of scanning in the linear range of the sigmoid (phosphene densities of 0.8–1 CPD) in more detail by reducing the phosphene density step size to 0.05 (Fig. 5(b) and 5(c)). The mean recognition rate with active scanning was significantly higher than with no scanning (p < 0.001), by up to 2.5-fold (Fig. 5(b) and 5(d)). Passive scanning was also associated with an increased recognition rate as compared with no scanning; however, this increase was smaller than that obtained with active scanning (p < 0.005) (Fig. 5(c) and 5(d)).

3.3. Zoom effect on recognition rate and reading speed

We hypothesized that presenting the subject with the zoom option, i.e., active control of the text size, would increase the recognition rate in the linear phosphene density range of the sigmoid curve, similarly to the effect of scanning. Indeed, in the linear range of 0.75–1 CPD (corresponding to 2.4–3.2 phosphenes per letter when no zoom is applied), active zooming resulted in an up to 4-fold increase in recognition rate (blue line) as compared to the no-zoom option (red line) for phosphene densities lower than 1.25 CPD (Fig. 6(a)). In contrast, as expected, at phosphene densities of 1.25 CPD and higher, where the recognition task can be easily performed, the zoom option made no contribution. At the low phosphene densities (both per letter and per degree), subjects opted for an average zoom factor of 1.7, while at higher phosphene densities they opted for lower values, reaching a zoom factor of 1 (no zoom) from a phosphene density of 1.75 CPD onward (Fig. 6(b)).

Fig. 6 Zoom factor effect on reading recognition rate. a) Recognition rate as a function of phosphene density for the different experimental paradigms, no zoom (red dots) and enabled zoom (blue dots). Solid lines represent a sigmoidal fit. b) Average employed zoom value as a function of phosphene density (blue dots). Solid line represents a fit to an exponential decay. c) Phosphenes-per-letter density as a function of CPD (blue dots); solid line represents a linear fit. d) Reading speed (words per minute) as a function of phosphene density for the different experimental paradigms, no zoom (red dots) and enabled zoom (blue dots). Solid lines represent a linear fit. Error bars represent standard error.

Interestingly, the zoom factor selected by the subjects decayed exponentially with increasing phosphene density and could be fitted to the following function (Eq. (4)):

$$\mathrm{Zoom}=1+4.3\,e^{-2\,\mathrm{CPD}} \tag{4}$$
Further analysis revealed that, along the entire range of CPDs, the phosphenes-per-letter density resulting from the subjects’ selected zoom showed a mild linear increase, from 4.5 at low phosphene densities to about 6.5 at higher phosphene densities (Fig. 6(c)).
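As a consistency check (our inference, not an analysis reported in the paper): taking CPD as cycles per degree (two phosphenes per cycle) and a letter width of roughly 1.6° at zoom 1, as implied by the reported 3.71 phosphenes per letter at 1.16 CPD, Eq. (4) approximately reproduces the 4.5–6.5 phosphenes-per-letter range of Fig. 6(c).

```python
import numpy as np

def chosen_zoom(cpd):
    """Eq. (4): average zoom selected by subjects at a given density."""
    return 1.0 + 4.3 * np.exp(-2.0 * cpd)

def phosphenes_per_letter(cpd, letter_deg=1.6):
    """Phosphenes sampling one letter after zooming. letter_deg (the
    letter width at zoom 1) is our inference from the 94% contrast
    data, not a value stated in the paper."""
    return 2.0 * cpd * chosen_zoom(cpd) * letter_deg

for cpd in (0.75, 1.25, 2.00):
    print(f"{cpd:.2f} CPD -> zoom {chosen_zoom(cpd):.2f}, "
          f"{phosphenes_per_letter(cpd):.1f} phosphenes/letter")
# ~4.7 at 0.75 CPD rising to ~6.9 at 2 CPD, close to the reported
# 4.5-6.5 range of Fig. 6(c)
```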

Increasing the phosphene density resulted in an increase in reading speed (Fig. 6(d)), in line with our expectation that the higher the implant’s resolution, the faster the subjects will recognize objects in the input image. Interestingly, notwithstanding the increase in recognition rate at lower phosphene densities (observed in Fig. 6(a)), enabling the zoom option did not significantly affect reading speed.

3.4. Scanning path

The final parameter we investigated is the mouse-controlled scanning path used by the subjects, which, to the best of our knowledge, has never been studied in the context of prosthetic vision. Figure 7 displays the scanning paths employed by the same subject over the word “sun” displayed at different CPDs.

Fig. 7 Scanning path illustrations. Scanning path overlaid on the phosphenized word (“sun”) for the same subject at various CPDs.

As described in the methods section, using a customized algorithm we detected direction changes in the scanning path, termed them “hand-saccades”, and measured the time dedicated to these scans, the scanning velocity, and the amplitude. As can be observed in Fig. 8(a), increasing the phosphene density was associated with a linear decrease in the total saccade time in the horizontal (x-axis) direction, while in the vertical (y-axis) direction the saccadic activity was very small. Similarly, the normalized number of hand-saccades (Fig. 8(b)), the average saccadic scanning velocity (Fig. 8(c)), and the average amplitude (Fig. 8(d)) all decreased with increasing phosphene density. Interestingly, the scanning speed in the horizontal (x) direction was significantly higher (up to 3 times) than in the vertical (y) direction (Fig. 8(c)), mainly at lower phosphene densities, again suggesting that scanning is performed mainly in the horizontal direction.

Fig. 8 Scanning path features analysis. (a) Hand-saccade time as a function of phosphene density for the x direction (red dots) and y direction (black dots). Solid line represents a linear fit. (b) Number of direction changes (number of hand-saccades) as a function of phosphene density for the x direction; solid line represents a linear fit. (c) Scanning velocity as a function of phosphene density for the x direction (red dots) and y direction (black dots). Solid lines represent a linear fit. (d) Scanning amplitude as a function of phosphene density; solid line represents a power decay fit. Error bars represent standard error.

3.5. Discussion

Active sensing, performed either with or without sensorimotor contingency, has been studied as an important tool for enhancing visual performance (among other senses). Here we present a thorough investigation of the effect of active sensing on reading performance in simulated prosthetic vision. Our main observation is that active sensing, in the form of either scanning the visual scene or zooming, significantly increased the recognition rate of the presented phosphenized words, mainly in the low phosphene density range.

The concept of active sensing has recently gained much interest in the field of sensory substitution, where vision is substituted by the tactile or auditory senses [4,13,37]. Of main interest in this field are the differences between sensorimotor contingency conditions, in which the scanning organ (actuator) is the same as the sensing organ (sensor), and sensorimotor non-contingency conditions, in which the actuator differs from the sensing organ. Many studies have shown that active sensing under sensorimotor contingency conditions outperforms sensorimotor non-contingency conditions, probably owing to the reliance on natural sensorimotor loops [4,38–40]. Furthermore, it has been shown that sensorimotor contingency is important for the normal development of the visual system [41]. Notwithstanding the contribution of sensorimotor contingency, some studies have shown enhanced performance also under sensorimotor non-contingency conditions [42], or even with virtual movements [37]. Of major importance in this field is the study by the Roorda group [11], who reported an enhancement of spatial acuity by up to 25% when the letter E, projected on the retina, moved, regardless of whether the motion was self-induced, externally stimulated, or randomly drawn from similar movement patterns (so-called incongruent motion).

In our case, active sensing is clearly performed under a sensorimotor non-contingency condition, as the hand muscles move the computer mouse that scans the image, while the eyes serve as the sensor. Nevertheless, we found a significant increase in reading performance: the total word recognition rate was 2 times higher with scanning than without, in line with the work of Bach-y-Rita and others [37,43]. It is worth noting that these findings are also in agreement with well-characterized data in many reports highlighting the contribution of eye–hand coordination to the encoding of the visual scene (e.g., [44]). Another line of evidence for the contribution of hand–eye motor coordination comes from studies demonstrating that eye saccades are faster when accompanied by hand movements [45]. Thus, in the current research paradigm, although the hand served as the actuator and the eye as the sensor, some sensorimotor contingency could still be present. It should be mentioned that, in the current experimental paradigm, the natural eye saccades of normally sighted volunteers are not expected to significantly enhance performance, as simulated prosthetic vision is limited mainly by the sampling density of the phosphenized letters rather than by the visual acuity of the subjects.

Interestingly, word scanning was performed mainly in the horizontal (x-axis) direction rather than the vertical (y-axis) direction, probably because of the prominent vertical components of the presented English words.

A slight enhancement in reading performance was also observed for passive scanning, where subjects were presented with a horizontally scanned image. This increase, however, was significantly smaller than with active sensing, in line with the results obtained when passive or virtual movements of the detector were employed [13,43,46].

It should be noted that in the current study the visual system was tested in a simulated prosthetic vision paradigm rather than electrically stimulated, as is the case in (both retinal and cortical) prosthetic vision. Interestingly, however, recent research in rats [47] showed that active sensing for object localization could be elicited by a real-time whisker tracking system combined with electrical microstimulation in the barrel cortex. These results suggest that spatial information can be gathered even with electrically evoked stimulation (rather than natural sensation) and under a sensorimotor non-contingency condition.

While subjects with the Argus II are instructed to avoid scanning with the eyes [48,49] (because the visual information transferred to the implant is not updated with eye movements), they do use active scanning by head movements [24], which can be considered sensorimotor non-contingent active sensing. It should be noted, however, that similarly to the above-mentioned eye–hand coordination, head movement is also part of the sensorimotor loop of vision, and thus this case may represent partial sensorimotor contingency. Accordingly, a series of papers demonstrated that blind patients implanted with the Argus II can map the visual scene based on the momentary gaze position [24,48,49].

In contrast, subjects implanted with the subretinal microphotodiode camera of Retina Implant AG or with the photovoltaic implant [23], where the prosthetic stimulation is directly linked to eye movements (as in natural vision), represent a case of full sensorimotor contingency. Indeed, a recent study demonstrated that these subjects exhibited classical fixation eye movement patterns, including ocular tremor, drift, and micro-saccades, only in the presence of a stimulus when the implant was ON [50].

Another important aspect of the current research is the use of zoom for enlarging the visual scene (word text in our case) at the expense of a reduced field of view (FOV). Our investigations reveal that high recognition rates, 90–100%, can be obtained with zoom even at very low phosphene densities, where recognition was very low with zoom disabled (Fig. 6(a)). More importantly, the current study paradigm, in which subjects chose the zoom factor best suited for the word recognition task, enables the extraction of useful information regarding the optimal reading letter size in terms of phosphenes per letter (Fig. 6(b), 6(c), and Eq. (4)). Indeed, we can infer from the obtained data that our subjects chose to read at an average text size corresponding to 4.5–6.5 phosphenes per letter (Fig. 6(c)). This is in line with previous studies reporting nearly perfect performance at 4 to 7.7 phosphenes per character [30–33]. In contrast to the increase in recognition rate, the lack of effect of zoom on reading speed (Fig. 6(d)) is probably the result of the trade-off between the letter size, the number of phosphenes per letter, and the field of view. A smaller field of view is associated not only with a longer time spent scanning the entire word, but also with the need to read each letter separately rather than the word as a whole, thus leading to context loss. In addition to letter size, contrast was also found to have a great effect on reading performance in simulated prosthetic vision, with a 9-fold increase in recognition rate observed when the 94% and 25% contrasts are compared at 1.25 CPD. It is worth noting that the reading speeds observed in our study (2–25 words/min, depending on phosphene density) are in agreement with previous studies of simulated prosthetic vision, such as Dagnelie et al. 2006 [33] and Chen et al. 2009 [28,51]. Clinical data from patients implanted with the Argus II are comparable [52] in some reports, or significantly lower and more variable (6–221 sec for letter recognition), albeit at a good recognition rate, in others [19,53].

Our results show that both the zoom and contrast features are important in the design of a prosthetic device and in estimating the optimal configurations for achieving maximal recognition and visual accuracy in implanted patients. Our results support previous reports [54–56] showing that image processing algorithms for the correction of low-contrast images or letter text can be very beneficial in enhancing prosthetic visual performance in systems where the visual scene is acquired by a camera, enabling their application to the acquired images before these are converted into an electrical stimulation pattern (e.g., Argus II, PRIMA). Recent investigations of strategies for enhancing the performance of the Argus II implant also highlighted the contribution of basic image processing, in the form of contrast enhancement and zoom, to the maximal obtained visual acuity (with values exceeding the theoretical limits) [25].

Finally, our study offers a comprehensive characterization of what we refer to as hand-saccades [57], defined as the path of the active scanning of the phosphenized word performed by the subject with a computer mouse. Analyzing eye movements during reading can serve to study various cognitive processes, as demonstrated by Raney et al. [58]. Data on saccades and fixations can aid in detecting patterns of reader engagement and performance [59] and in categorizing reading skills in children [60] and adults [61]. In the current study, we used words rather than sentences to measure the reading performance of our subjects. Analyzing the hand-saccades during word recognition, we found specific characteristics associated with increased task complexity. More specifically, we found a clear negative correlation between the time spent on saccades, the number of saccades, and the saccade amplitude and velocity on the one hand, and the phosphene density on the other, suggesting that the more challenging the task (lower phosphene density), the more scanning is performed (in all investigated aspects). Analysis of the scanning path data demonstrates that the scanning velocity is significantly higher in the horizontal than in the vertical direction, highlighting the preference for horizontal text scanning. This is in line with evidence of the importance of vertical structures in the identification of letters (e.g., [62,63]). Interestingly, analysis of the visual scene motion path, where subjects tracked a target in a simulated prosthetic vision paradigm, has previously served as a means of evaluating visual performance [57]; in that study, however, the effect of scanning on visual performance was not evaluated.

4. Conclusion

This work represents a comprehensive study shedding light on important aspects of sensing and sensorimotor contingency in the field of retinal prosthetics in general, and on reading performance in particular. Active photonic sensing is a vital concept for better understanding and improving prosthetic vision, as well as for vision research in general. Future work will focus on incorporating this sensing paradigm into a more realistic prosthetic vision simulation algorithm that introduces variable and limited persistence of the visual percept, and on the use of an eye tracker. Moreover, more work remains to be done to translate these findings into sight restoration for patients implanted with retinal prostheses.

Funding

Israeli Science Foundation (#157-16) and Israeli Ministry of Science.

Acknowledgments

The authors would like to thank Dr. Yoram Bonneh for his insightful comments and discussion.

Disclosures

The authors declare that there are no conflicts of interest related to this article.

References

1. J. M. Loomis, “Tactile letter recognition under different modes of stimulus presentation,” Percept. Psychophys. 16(2), 401–408 (1974). [CrossRef]  

2. E. Gamzu and E. Ahissar, “Importance of temporal cues for tactile spatial-frequency discrimination,” J. Neurosci. 21(18), 7416–7427 (2001). [CrossRef]   [PubMed]

3. A. Saig, G. Gordon, E. Assa, A. Arieli, and E. Ahissar, “Motor-sensory confluence in tactile perception,” J. Neurosci. 32(40), 14022–14032 (2012). [CrossRef]   [PubMed]  

4. A. Zilbershtain-Kra, Y. Ahissar, and E. Arieli, “Speeded performance with active-sensing based vision-to-touch substitution,” FENS Abstract D (2016), p. Do36.

5. E. Ahissar and A. Arieli, “Seeing via miniature eye movements: a dynamic hypothesis for vision,” Front. Comput. Neurosci. 6, 89 (2012). [CrossRef]   [PubMed]  

6. D. W. Arathorn, S. B. Stevenson, Q. Yang, P. Tiruveedhula, and A. Roorda, “How the unstable eye sees a stable and moving world,” J. Vis. 13(10), 22 (2013). [CrossRef]   [PubMed]  

7. M. Rolfs, R. Kliegl, and R. Engbert, “Toward a model of microsaccade generation: the case of microsaccadic inhibition,” J. Vis. 8(11), 5 (2008). [CrossRef]   [PubMed]  

8. X. Troncoso, J. Otero-Millan, S. Macknik, I. Serrano-Pedraza, and S. Martinez-Conde, “Saccades and microsaccades during visual fixation, exploration, and search: Foundations for a common saccadic generator,” J. Vis. 9(8), 447 (2010). [CrossRef]   [PubMed]  

9. M. Rolfs, “Microsaccades: small steps on a long way,” Vision Res. 49(20), 2415–2441 (2009). [CrossRef]   [PubMed]  

10. H. B. Barlow, “Eye movements during fixation,” J. Physiol. 116(3), 290–306 (1952). [CrossRef]   [PubMed]  

11. K. Ratnam, N. Domdei, W. M. Harmening, and A. Roorda, “Benefits of retinal image motion at the limits of spatial vision,” J. Vis. 17(1), 30 (2017). [CrossRef]   [PubMed]  

12. M. Rucci, R. Iovin, M. Poletti, and F. Santini, “Miniature eye movements enhance fine spatial detail,” Nature 447(7146), 852–854 (2007). [CrossRef]   [PubMed]  

13. B. Hsu, C. H. Hsieh, S. N. Yu, E. Ahissar, A. Arieli, and Y. Zilbershtain-Kra, “A tactile vision substitution system for the study of active sensing,” in Annual International Conference of the IEEE Engineering in Medicine and Biology Society (2013). [CrossRef]

14. S. Y. Kim, S. Sadda, J. Pearlman, M. S. Humayun, E. de Juan Jr., B. M. Melia, and W. R. Green, “Morphometric analysis of the macula in eyes with disciform age-related macular degeneration,” Retina 22(4), 471–477 (2002). [CrossRef]   [PubMed]  

15. J. L. Stone, W. E. Barlow, M. S. Humayun, E. de Juan Jr., and A. H. Milam, “Morphometric analysis of macular photoreceptors and ganglion cells in retinas with retinitis pigmentosa,” Arch. Ophthalmol. 110(11), 1634–1639 (1992). [CrossRef]   [PubMed]  

16. F. Mazzoni, E. Novelli, and E. Strettoi, “Retinal ganglion cells survive and maintain normal dendritic morphology in a mouse model of inherited photoreceptor degeneration,” J. Neurosci. 28(52), 14282–14292 (2008). [CrossRef]   [PubMed]  

17. G. A. Goetz and D. V. Palanker, “Electronic approaches to restoration of sight,” Rep. Prog. Phys. 79(9), 096701 (2016). [CrossRef]   [PubMed]  

18. L. da Cruz, J. D. Dorn, M. S. Humayun, G. Dagnelie, J. Handa, P. O. Barale, J. A. Sahel, P. E. Stanga, F. Hafezi, A. B. Safran, J. Salzmann, A. Santos, D. Birch, R. Spencer, A. V. Cideciyan, E. de Juan, J. L. Duncan, D. Eliott, A. Fawzi, L. C. Olmos de Koo, A. C. Ho, G. Brown, J. Haller, C. Regillo, L. V. Del Priore, A. Arditi, R. J. Greenberg, and Argus II Study Group, “Five-Year Safety and Performance Results from the Argus II Retinal Prosthesis System Clinical Trial,” Ophthalmology 123(10), 2248–2254 (2016). [CrossRef]   [PubMed]  

19. L. da Cruz, B. F. Coley, J. Dorn, F. Merlini, E. Filley, P. Christopher, F. K. Chen, V. Wuyyuru, J. Sahel, P. Stanga, M. Humayun, R. J. Greenberg, G. Dagnelie, and the Argus II Study Group, “The Argus II epiretinal prosthesis system allows letter and word reading and long-term function in patients with profound vision loss,” Br. J. Ophthalmol. 97(5), 632 (2013).

20. A. C. Ho, M. S. Humayun, J. D. Dorn, L. da Cruz, G. Dagnelie, J. Handa, P.-O. Barale, J.-A. Sahel, P. E. Stanga, F. Hafezi, A. B. Safran, J. Salzmann, A. Santos, D. Birch, R. Spencer, A. V. Cideciyan, E. de Juan, J. L. Duncan, D. Eliott, A. Fawzi, L. C. Olmos de Koo, G. C. Brown, J. A. Haller, C. D. Regillo, L. V. Del Priore, A. Arditi, D. R. Geruschat, R. J. Greenberg, and Argus II Study Group, “Long-Term Results from an Epiretinal Prosthesis to Restore Sight to the Blind,” Ophthalmology 122(8), 1547–1554 (2015). [CrossRef]   [PubMed]  

21. K. Stingl, K. U. Bartz-Schmidt, D. Besch, C. K. Chee, C. L. Cottriall, F. Gekeler, M. Groppe, T. L. Jackson, R. E. MacLaren, A. Koitschev, A. Kusnyerik, J. Neffendorf, J. Nemeth, M. A. N. Naeem, T. Peters, J. D. Ramsden, H. Sachs, A. Simpson, M. S. Singh, B. Wilhelm, D. Wong, and E. Zrenner, “Subretinal Visual Implant Alpha IMS--Clinical trial interim report,” Vision Res. 111(Pt B), 149–160 (2015). [CrossRef]   [PubMed]  

22. E. Zrenner, K. U. Bartz-Schmidt, H. Benav, D. Besch, A. Bruckmann, V.-P. Gabel, F. Gekeler, U. Greppmaier, A. Harscher, S. Kibbel, J. Koch, A. Kusnyerik, T. Peters, K. Stingl, H. Sachs, A. Stett, P. Szurman, B. Wilhelm, and R. Wilke, “Subretinal electronic chips allow blind patients to read letters and combine them to words,” Proc. Biol. Sci. 278(1711), 1489–1497 (2011). [CrossRef]   [PubMed]  

23. H. Lorach, G. Goetz, R. Smith, X. Lei, Y. Mandel, T. Kamins, K. Mathieson, P. Huie, J. Harris, A. Sher, and D. Palanker, “Photovoltaic restoration of sight with high visual acuity,” Nat. Med. 21(5), 476–482 (2015). [PubMed]  

24. A. Caspi, P. E. Rosendall, J. W. Harper, M. P. Barry, K. D. Katyal, G. Dagnelie, and A. Roy, “Combined eye-head vs. head-only scanning in a blind patient implanted with the Argus II retinal prosthesis,” in 2017 8th International IEEE/EMBS Conference on Neural Engineering (NER) (2017), pp. 29–32. [CrossRef]  

25. J. Sahel, S. Mohand-Said, P. Stanga, A. Caspi, and R. Greenberg, “AcuboostTM: Enhancing the maximum acuity of the Argus II Retinal Prosthesis System,” Invest. Ophthalmol. Vis. Sci. 54(15), 1389 (2013).

26. K. Cha, K. Horch, and R. A. Normann, “Simulation of a phosphene-based visual field: visual acuity in a pixelized vision system,” Ann. Biomed. Eng. 20(4), 439–449 (1992). [CrossRef]   [PubMed]  

27. S. C. Chen, L. E. Hallum, N. H. Lovell, and G. J. Suaning, “Learning prosthetic vision: a virtual-reality study,” IEEE Trans. Neural Syst. Rehabil. Eng. 13(3), 249–255 (2005). [CrossRef]   [PubMed]  

28. S. C. Chen, G. J. Suaning, J. W. Morley, and N. H. Lovell, “Simulating prosthetic vision: I. Visual models of phosphenes,” Vision Res. 49(12), 1493–1506 (2009). [CrossRef]   [PubMed]  

29. A. P. Fornos, J. Sommerhalder, B. Rappaz, A. B. Safran, and M. Pelizzone, “Simulation of artificial vision, III: do the spatial or temporal characteristics of stimulus pixelization really matter?” Invest. Ophthalmol. Vis. Sci. 46(10), 3906–3912 (2005). [CrossRef]   [PubMed]  

30. L. Fu, S. Cai, H. Zhang, G. Hu, and X. Zhang, “Psychophysics of reading with a limited number of pixels: towards the rehabilitation of reading ability with visual prosthesis,” Vision Res. 46(8-9), 1292–1301 (2006). [CrossRef]   [PubMed]  

31. A. P. Fornos, J. Sommerhalder, and M. Pelizzone, “Reading with a simulated 60-channel implant,” Front. Neurosci. 5, 57 (2011). [PubMed]  

32. J. Sommerhalder, E. Oueghlani, M. Bagnoud, U. Leonards, A. B. Safran, and M. Pelizzone, “Simulation of artificial vision: I. Eccentric reading of isolated words, and perceptual learning,” Vision Res. 43(3), 269–283 (2003). [CrossRef]   [PubMed]  

33. G. Dagnelie, D. Barnett, M. S. Humayun, and R. W. Thompson Jr., “Paragraph text reading using a pixelized prosthetic vision simulator: parameter dependence and task learning in free-viewing conditions,” Invest. Ophthalmol. Vis. Sci. 47(3), 1241–1250 (2006). [CrossRef]   [PubMed]  

34. F. A. Wichmann and N. J. Hill, “The psychometric function: I. Fitting, sampling, and goodness of fit,” Percept. Psychophys. 63(8), 1293–1313 (2001). [CrossRef]   [PubMed]  

35. G. B. Wetherill and H. Levitt, “Sequential estimation of points on a psychometric function,” Br. J. Math. Stat. Psychol. 18(1), 1–10 (1965). [CrossRef]   [PubMed]

36. J. Nachmias, “On the psychometric function for contrast detection,” Vision Res. 21(2), 215–223 (1981). [CrossRef]   [PubMed]  

37. P. Bach-y-Rita and S. W. Kercel, “Sensory substitution and the human-machine interface,” Trends Cogn. Sci. (Regul. Ed.) 7(12), 541–546 (2003). [CrossRef]   [PubMed]

38. Y. Visell, “Tactile sensory substitution: Models for enaction in HCI,” Interact. Comput. 21(1–2), 38–53 (2009). [CrossRef]  

39. J. K. O’Regan and A. Noë, “A sensorimotor account of vision and visual consciousness,” Behav. Brain Sci. 24(5), 939–973 (2001). [CrossRef]   [PubMed]  

40. J. S. Chan, T. Maucher, J. Schemmel, D. Kilroy, F. N. Newell, and K. Meier, “The virtual haptic display: a device for exploring 2-D virtual shapes in the tactile modality,” Behav. Res. Methods 39(4), 802–810 (2007). [CrossRef]   [PubMed]  

41. R. Held and A. Hein, “Movement-produced stimulation in the development of visually guided behavior,” J. Comp. Physiol. Psychol. 56(5), 872–876 (1963). [CrossRef]   [PubMed]  

42. T. Pietrzak, A. Crossan, S. A. Brewster, B. Martin, and I. Pecci, “Exploring geometric shapes with touch,” in Lecture Notes in Computer Science (2009), Vol. 5726, Part 1, pp. 145–148.

43. P. Bach-y-Rita, “Tactile sensory substitution studies,” Ann. N. Y. Acad. Sci. 1013(1), 83–91 (2006). [CrossRef]   [PubMed]  

44. R. S. Johansson, G. Westling, A. Bäckström, and J. R. Flanagan, “Eye-hand coordination in object manipulation,” J. Neurosci. 21(17), 6917–6932 (2001). [PubMed]  

45. L. H. Snyder, J. L. Calton, A. R. Dickinson, and B. M. Lawrence, “Eye-hand coordination: saccades are faster when accompanied by a coordinated arm movement,” J. Neurophysiol. 87(5), 2279–2286 (2002). [CrossRef]   [PubMed]  

46. C. Lenay, O. Gapenne, S. Hanneton, C. Marque, and C. Genouëlle, “Chapter 16. Sensory substitution,” in (2003), pp. 275–292.

47. S. Venkatraman and J. M. Carmena, “Active sensing of target location encoded by cortical microstimulation,” IEEE Trans. Neural Syst. Rehabil. Eng. 19(3), 317–324 (2011). [CrossRef]   [PubMed]  

48. A. Caspi, A. Roy, J. D. Dorn, and R. J. Greenberg, “Retinotopic to spatiotopic mapping in blind patients implanted with the argus II retinal prosthesis,” Invest. Ophthalmol. Vis. Sci. 58(1), 119–127 (2017). [CrossRef]   [PubMed]  

49. N. Sabbah, C. N. Authié, N. Sanda, S. Mohand-Said, J. A. Sahel, and A. B. Safran, “Importance of eye position on spatial localization in blind subjects wearing an Argus II retinal prosthesis,” Invest. Ophthalmol. Vis. Sci. 55(12), 8259–8266 (2014). [CrossRef]   [PubMed]  

50. Z. M. Hafed, K. Stingl, K. U. Bartz-Schmidt, F. Gekeler, and E. Zrenner, “Oculomotor behavior of blind patients seeing with a subretinal visual implant,” Vision Res. 118, 119–131 (2016). [CrossRef]   [PubMed]  

51. S. C. Chen, G. J. Suaning, J. W. Morley, and N. H. Lovell, “Simulating prosthetic vision: II. Measuring functional capacity,” Vision Res. 49(19), 2329–2343 (2009). [CrossRef]   [PubMed]  

52. J. Sahel, L. da Cruz, F. Hafezi, P. E. Stanga, F. Merlini, B. Coley, R. J. Greenberg, and the Argus II Study Group, “Subjects blind from outer retinal dystrophies are able to consistently read short sentences using the Argus™ II retinal prosthesis system,” Invest. Ophthalmol. Vis. Sci. 52(14), 3420 (2011).

53. H. C. Stronks and G. Dagnelie, “The functional performance of the Argus II retinal prosthesis,” Expert Rev. Med. Devices 11(1), 23–30 (2014). [CrossRef]   [PubMed]  

54. L. E. Hallum, S. L. Cloherty, and N. H. Lovell, “Image analysis for microelectronic retinal prosthesis,” IEEE Trans. Biomed. Eng. 55(1), 344–346 (2008). [CrossRef]   [PubMed]  

55. N. Parikh, L. Itti, and J. Weiland, “Saliency-based image processing for retinal prostheses,” J. Neural Eng. 7(1), 16006 (2010). [CrossRef]   [PubMed]  

56. N. Parikh, L. Itti, M. Humayun, and J. Weiland, “Performance of visually guided tasks using simulated prosthetic vision and saliency-based cues,” J. Neural Eng. 10(2), 026017 (2013). [CrossRef]   [PubMed]  

57. L. E. Hallum, G. J. Suaning, D. S. Taubman, and N. H. Lovell, “Simulated prosthetic visual fixation, saccade, and smooth pursuit,” Vision Res. 45(6), 775–788 (2005). [CrossRef]   [PubMed]  

58. G. E. Raney, S. J. Campbell, and J. C. Bovee, “Using eye movements to evaluate the cognitive processes involved in text comprehension,” J. Vis. Exp. (83), e50780 (2014). [CrossRef]   [PubMed]

59. B. Maddox, A. P. Bayliss, P. Fleming, P. E. Engelhardt, S. G. Edwards, and F. Borgonovi, “Observing response processes with eye tracking in international large-scale assessments: evidence from the OECD PIAAC assessment,” Eur. J. Psychol. Educ. 33(3), 543–558 (2018). [CrossRef]  

60. K. Krstić, A. Šoškić, V. Ković, and K. Holmqvist, “All good readers are the same, but every low-skilled reader is different: an eye-tracking study using PISA data,” Eur. J. Psychol. Educ. 33(3), 521–541 (2018). [CrossRef]  

61. M. Krieber, K. D. Bartl-Pokorny, F. B. Pokorny, C. Einspieler, A. Langmann, C. Körner, T. Falck-Ytter, and P. B. Marschik, “The Relation between Reading Skills and Eye Movement Patterns in Adolescent Readers: Evidence from a Regular Orthography,” PLoS One 11(1), e0145934 (2016). [CrossRef]   [PubMed]  

62. J. S. Wolffsohn, G. Bhogal, and S. Shah, “Effect of uncorrected astigmatism on vision,” J. Cataract Refract. Surg. 37(3), 454–460 (2011). [CrossRef]   [PubMed]  

63. J. Wills, R. Gillett, E. Eastwell, R. Abraham, K. Coffey, A. Webber, and J. Wood, “Effect of simulated astigmatic refractive error on reading performance in the young,” Optom. Vis. Sci. 89(3), 271–276 (2012). [CrossRef]   [PubMed]  
