Avian Visual Cognition


Hierarchical Stimulus Processing by Pigeons 
Robert G. Cook
Department of Psychology, Tufts University

Understanding how visual stimuli are perceived, discriminated, recognized, and ultimately come to control behavior is one of the central issues in  animal cognition. This chapter reviews recent experiments from my laboratory looking at how pigeons process hierarchically-arranged information presented at different spatial scales. Results are presented from three different paradigms that tested a variety of texture stimuli, hierarchical figural stimuli, and dynamic object-like stimuli. The results suggest that pigeons can switch between the processing of local and global information depending on the situation and stimuli tested. It is suggested that factors such as the organization of the avian visual system, attention, stimulus organization, and motion are critical to determining which of these hierarchical levels will come to control behavior.


I. Introduction

What do birds visually experience as they fly overhead (the proverbial bird’s eye view), forage for food on the ground, or look for a suitable mate? Understanding how such important stimuli are perceived, discriminated, recognized, and ultimately come to control behavior is one of the central issues in animal cognition. The research described in this chapter reflects my interest in understanding the mechanisms of visual perception, cognition, and action in birds, and most specifically in the pigeon. Vision's unmistakable importance to these creatures is reflected in the substantial proportion of their compact brains devoted to visual processing and the huge relative size of their eyes. Besides better understanding what they see, we are just as keenly interested in the psychological and physiological mechanisms that generate these internal experiences. Thus, one of our overarching goals has been to understand the perceptual and cognitive processes involved from the point at which visual information first impinges upon the retina to the final behavioral reaction to this information. By carefully examining the behavior of animals in various types of visual discriminations, we can uncover much about these internal cognitive processes and how they work.

Consider for a moment what happens when you open your eyes. You are immediately greeted with a world full of objects making up a larger visual scene. Some of these objects clearly have component parts, and each has many different features. All of them, however, are easily recognized, highly stable, and appropriately located within the scene. Furthermore, you can easily switch your attention between and among the objects and their features, first taking in the global properties of the scene and then zooming in a moment later to closely examine its local details. The big problem is that none of this coherent perceptual structure is directly available in the stimulation striking your retina (Kohler, 1947). Your retina only reports the relative and ever-shifting amount of light at different points on its two-dimensional surface. This information directly tells you very little about which point of light should go with another, which is part of an object, which is part of the background, and over what spatial scale to compute these different relations. As a result, the subsequent “reality” of surfaces, edges, parts, objects, and their stable position in the scene are all derived perceptual constructions of the brain. Some idea of the difficulty of solving these problems can be seen in the slow progress made in making mobile computers that can see and rapidly act within such an object-filled world. Presumably because of the computational demands required to solve the ambiguities present within the retinal stimulus, a quite sizable portion of the human brain is given over to processing visual information.

Of course, other visually-dominant animals face these same visual problems whenever they move about the world as well. Birds are particularly interesting in this regard. Because they fly, they need very accurate descriptions of the three-dimensional (3D) visual world around them. Yet, these creatures do so with brains that are quite small on an absolute scale. In the case of the pigeon, their brain is about 1/1000th the size of our own (Husband & Shimizu, 2001). Despite this, they seem to just as effortlessly solve the same perceptual ambiguities as our much larger brains. It is their extraordinary mixture of visual competence and small brain size that makes the psychological study of birds a challenging and important addition to our general understanding of the mechanisms of visual cognition. 

Given such considerations, a quite natural question to ask is whether birds may experience and act upon an object-filled world much like our own. Birds in general certainly seem to behave as if they do, grasping food, avoiding obstacles, courting potential mates, and so on. However, there is no easy way to answer such a seemingly simple question, except through experiments. Because we can never directly share their experience, the only way to answer this question is to collect behavioral evidence that makes an object-based view the most compelling explanation for our different experimental observations. Over the last decade, my lab has used a number of different approaches to help answer this question. Our comparative strategy has been to examine whether the cognitive mechanisms of visual perception and action in pigeons operate in the same way as established for humans or function differently. This chapter reviews some of this work as it specifically pertains to the question of how pigeons process the local and global information from different types of stimuli.

The degree to which animals perceive and are controlled by the higher-order, global, or object-like properties of stimuli in comparison to more local, featural, or part-based descriptions has been an issue of concern in animal cognition for some time, although mainly in the spatial domain (e.g., Spetch & Edwards, 1988; Cook & Tauro, 1999). The long-standing associative tradition within animal learning has tended in the past to emphasize that analytical control emanates from specific features, encouraging the use of simple stimuli (bells, lights, buzzers, etc.) in studying these relations. This emphasis can be seen in the numerous feature-based models that have been developed to explain animal discrimination learning (see Huber, 2001). Similar notions have tended to dominate ethological thinking as well, with an important emphasis on simple featural sign stimuli as key determinants of behavior. This same feature-based analysis has also been extended to more complex stimuli, such as pictures. In this vein, the now classic result reviewed by Cerella (1986) is an excellent example (see also D’Amato & Van Sant, 1988; Edwards & Honig, 1987). In this case, he asked pigeons to discriminate among different “Peanuts” cartoon characters. By scrambling the cartoons’ parts, he found that this behavior was mediated by the specific features of the drawings, rather than the overall relations between their component parts. This suggested that local details, rather than the big picture, were more important in how these animals examine the world. Recently, however, solid evidence has emerged that the global relations among component parts can also be critical in discriminative control. For example, Wasserman, Kirkpatrick-Steger, Van Hamme, and Biederman (1993) found that scrambling the component parts of complex objects made from geons reduced their discrimination, indicative of partial control by the spatial configuration of the components (see Kirkpatrick (2001) for more details).

These contrasting results suggest that birds can perceive and discriminate complex stimuli based on either the local parts or the global configuration, much like humans. However, how and why these different types of stimulus control develop is still poorly understood. The answer lies in part in better understanding how animals perceive the local and global structure of stimuli, how they switch processing between these different spatial scales, and how these perceptions interact with the mechanisms responsible for discriminative stimulus control. As such, the remainder of this chapter focuses on how pigeons process global and local structure. For the most part, these experiments employ hierarchically-arranged stimuli to examine these issues. Hierarchical stimuli consist of smaller local elements arranged to create larger global structures within the same stimulus. In an embedded texture stimulus, for example, two different local elements are used to create a global, odd "target" area that contrasts with the surrounding "distractor" area. With such stimulus arrangements, subjects can be asked to discriminate information about the local level, the global level, or both levels. The resulting data can then be used to see how the different levels were processed and how they interacted with one another. The first two sections of this chapter describe research employing such hierarchically-arranged stimuli to investigate the mechanisms of perceptual grouping and part/whole processing in pigeons. The last section of the chapter describes a slightly different approach, testing the pigeons with dynamic object stimuli.

II. Grouping and Perceptual Organization

This first section reviews some of our past research using hierarchical texture stimuli, like the embedded texture example described above, to examine the processes of early vision and perceptual organization in the pigeon, especially in relation to the roles of similarity, spatial factors, and timing in determining control by global information. Early vision consists of those processes responsible for taking unrefined visual patterns from the retina and rapidly transforming them into perceptually organized groupings of edges and larger surfaces. These edges and surfaces presumably are the building blocks for the subsequent perception and recognition of global objects. As such, texture stimuli have turned out to be an excellent vehicle for studying these types of processes in humans (Beck, 1966, 1982; Julesz, 1981; Marr, 1982) and pigeons (Cook, 1992a, 1992b; Cook, Cavoto, & Cavoto, 1996).

Role of dimensional similarity

Studies using texture stimuli have found that the human visual system can quickly group similar color and shape features into global spatial regions and then rapidly segregate them at their boundaries or edges in order to begin establishing figure-ground relations within a scene. For example, you should have no problem immediately segregating and locating the odd “target” region in these color and shape texture stimuli (click here to see the color texture stimulus and here to see the shape texture stimulus). This is because their different local features are perceptually grouped early on by separate dimensional channels into larger areas of similar colors or shapes (Treisman & Gelade, 1980).

On the other hand, stimuli that violate this dimensionally consistent organization become much harder to visually segregate, as illustrated in the following conjunctive display (click here to see the conjunctive stimulus). It, too, has a small odd region, but it is much harder to locate. Even after finding it, the target/distractor edges never become as distinct (the target, by the way, is the region of pink circles and red triangles in the lower left part of the display). Because conjunctive texture stimuli are made from regions formed by unique combinations of elements, they require conjoining features from both the color and shape dimensions to identify the target, an act made more difficult by the dimensionally-specific perceptual channels of our early visual system. As a consequence, several successive scans of the display involving focal attention are needed to identify and locate the target in such mixed displays. These differences between feature and conjunctive search have suggested that multiple processes are involved in visual search. Treisman has argued that one process involves the immediate, simultaneous, and preattentive processing of the different visual features present in the array (Treisman & Gelade, 1980; Treisman & Gormican, 1988; Treisman & Sato, 1990). This process is responsible for the visual "pop out" and rapid detection of dimensionally consistent targets. The second process involves the serial application of focal attention over the display, analogous to a spotlight searching over a large area. Treisman has argued that this latter mechanism is extensively used or required for the accurate detection of conjunctively-organized targets. Similar ideas involving the early parsing and subsequent combination of dimensional visual information can also be found in many other theories of human and machine vision (e.g., Barrow & Tenenbaum, 1978; Broadbent, 1977; Cave & Wolfe, 1990; Duncan & Humphreys, 1989; Hoffman, 1979; Marr, 1982; Neisser, 1967).
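To make the logic of this two-process account concrete, the following sketch simulates hypothetical search times under Treisman-style assumptions. It is illustrative only: the timing parameters and the simple self-terminating scan are assumptions of the sketch, not a fit to any data reported in this chapter. Feature targets are detected by a parallel preattentive stage whose time is roughly flat across display sizes, whereas conjunction targets require a serial scan whose expected time grows with the number of items.

```python
import random

def simulated_search_time(n_items, target_type,
                          base_ms=400, scan_ms_per_item=40):
    """Toy two-process search model: feature targets 'pop out' so time is
    roughly constant; conjunction targets require a serial, self-terminating
    scan, so expected time grows with display size."""
    if target_type == "feature":
        return base_ms + random.gauss(0, 20)           # flat search function
    if target_type == "conjunction":
        items_examined = random.randint(1, n_items)    # scan until found
        return base_ms + scan_ms_per_item * items_examined
    raise ValueError("target_type must be 'feature' or 'conjunction'")

if __name__ == "__main__":
    for n in (8, 16, 32):
        feat = sum(simulated_search_time(n, "feature") for _ in range(2000)) / 2000
        conj = sum(simulated_search_time(n, "conjunction") for _ in range(2000)) / 2000
        print(f"{n:2d} items: feature ~{feat:.0f} ms, conjunction ~{conj:.0f} ms")
```

Running the sketch shows the qualitative signature described above: simulated feature search times stay flat as items are added, while conjunction search times climb with display size.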

Our initial question was whether pigeons would show this same fundamental perceptual effect when tested with feature and conjunctive stimuli. In our texture discrimination experiments, pigeons are trained and tested in operant chambers using computer-generated textured stimuli that always contain an odd target region that differs from the surround in either color, shape, or a combination of these dimensions. The pigeon’s task is to locate and peck at this randomly located target region in order to obtain food reward. A pigeon successfully performing this “target localization” task over several trials can be seen in the first video clip (click here to see a video of the target localization task). If the pigeons do not successfully locate the target in a display, they receive a brief timeout in the dark, as can be seen in the second video clip (click here to see a video of a pigeon making an error in this task). How accurately and quickly the pigeons can locate and peck at the small, odd "target" region in such displays can then be used to measure their processing of the display.

Using this target localization task, we investigated how pigeons responded to feature and conjunctive texture displays similar to those tested with humans (Cook, 1992b; Cook, Cavoto, & Cavoto, 1996). In the most precise test of this approach, Cook et al. (1996) examined stimuli made from combinations of two (color & line orientation) or three (color, line length, & line orientation) dimensions and patterned directly after the visual search stimuli tested with humans by Wolfe, Cave, and Franzel (1989). The target region of the feature displays always differed along one of these dimensions, while the other dimensions were allowed to vary irrelevantly. This type of organization permits only the global structure to mediate their discrimination (Cook, 1992b). Examples of these feature stimuli for the two- and three-dimension conditions can be viewed in the table below. The target region of the conjunctive displays composed from two dimensions was made from a unique combination of single values from each dimension, values that were shared in part with the distractors. In a similar manner, the conjunctive targets composed from three dimensions were also unique combinations of features that shared either one or two values with each distractor.

Examples of Feature and Conjunctive Stimuli Tested by Cook et al. (1996)
(Click on each entry below to see an expanded version of an example display.)

Feature - Two Dimensions: Color | Orientation
Conjunctive - Two Dimensions: Conjunctive
Feature - Three Dimensions: Color | Orientation | Size
Conjunctive - Three Dimensions: 1-Shared | 2-Shared
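To make the construction of these displays more concrete, here is a minimal sketch of how a feature or conjunctive texture grid can be generated. The element values, grid size, and target size are illustrative assumptions, not the exact parameters used by Cook et al. (1996). In the feature display the target patch differs from the surround only on the relevant dimension, while the other dimension varies irrelevantly everywhere; in the conjunctive display the target is a unique pairing of values, each of which is shared with some of the distractors.

```python
import random

COLORS = ("red", "green")
ORIENTATIONS = (0, 90)   # line orientation in degrees

def make_display(kind, rows=10, cols=16, target_size=3):
    """Return a rows x cols grid of (color, orientation) elements containing
    a randomly placed odd target patch.  kind is 'color-feature' or
    'conjunctive' (two-dimension versions only, for illustration)."""
    r0 = random.randint(0, rows - target_size)
    c0 = random.randint(0, cols - target_size)
    grid = [[None] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            in_target = (r0 <= r < r0 + target_size) and (c0 <= c < c0 + target_size)
            if kind == "color-feature":
                # Relevant dimension: color.  Orientation varies irrelevantly
                # everywhere, so only the global color region marks the target.
                color = COLORS[0] if in_target else COLORS[1]
                orientation = random.choice(ORIENTATIONS)
            elif kind == "conjunctive":
                # Target = unique combination (red, 0 deg); each distractor
                # type shares exactly one value with the target.
                if in_target:
                    color, orientation = "red", 0
                else:
                    color, orientation = random.choice([("red", 90), ("green", 0)])
            else:
                raise ValueError(kind)
            grid[r][c] = (color, orientation)
    return grid, (r0, c0)   # grid of elements plus the target's top-left cell
```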

 

We found that pigeons, like humans, varied in their search efficacy depending upon the display’s dimensional organization. In one experiment, we consistently tested the pigeons with the same target region over an extended number of sessions (Cook et al., 1996, Experiment 2). Because of their increased familiarity with the target’s identity, the birds’ responses to these stimuli showed up in how long it took them to locate the target, rather than in the more typical form of an accuracy difference (Cook, 1992b; Experiments 1 & 3 of Cook et al., 1996). This reaction time (RT) measure allows a direct comparison to the performance of humans tested with similar stimuli. The next figure shows these RT results for both humans and pigeons for these different types of displays (click here to see the comparison of human and pigeon results). Both the pigeons and humans were best at localizing targets in the unidimensional feature displays relative to the dimensionally mixed conjunctive displays. Furthermore, the pattern of search exhibited for the different conjunctive arrangements was also the same for both species. In this same set of studies, we also found three other key similarities between pigeons and humans in how they process textured dimensional information: (1) the number of distractors present in the displays differentially influenced feature and conjunctive search performance, (2) each species showed no search interference in the presence of irrelevant dimensional information, and (3) each showed a benefit from combining redundant and relevant dimensional information from multiple dimensions. These similar behavioral reactions suggest that the early visual registration and search processes of these different vertebrate species are organized in a comparable fashion, at least with regard to the early processing of globally-relevant dimensional information.

Spatial properties

Besides similarity, spatial factors such as element proximity have also been identified as important in the global discrimination of multi-element textured displays (e.g., Julesz, 1981). One informative approach in humans has been to investigate displays composed from small random dots (see the display to the right). Once again you should have little difficulty in identifying the location of the odd "target." The important feature of dotted displays is that the observer cannot rely on local information to make the discrimination, since each dot is identical in size, shape, and luminance. Instead, the observer is required to integrate attributes of the texture's local dot geometry into a global percept in order to see the larger emergent structure in the display. Because of this property, dot textures have played a critical role in isolating the grouping and integration mechanisms underlying texture perception in humans (Barlow, 1978; Burgess, Wagner, Jennings & Barlow, 1981; Julesz, 1981; Uttal, 1976), and increasingly so in birds (Blough, 1985; Bischof, Reid, Wylie, & Spetch, 1999; Cook, 1993a, 1993b).

This section describes several previously unpublished experiments testing pigeons with dotted texture displays. Our first goal was to see if pigeons could discriminate dotted texture displays with the same degree of success that we had previously found with similarity-based dimensional displays. If so, we were then interested in determining whether they would be affected by the same basic variables known to influence human performance with dotted displays. Given these goals, we tested pigeons with two types of dotted texture displays (click to see an example dot density display). The first type consisted of dot density displays, produced by either increasing or decreasing the probability of a dot occurring within the randomly located target region vis-à-vis the surrounding region of background dots. These displays correspond to the first-order texture differences in Julesz's general scheme for describing the statistical properties of texture displays. The regions of such displays differ on a number of properties, including dot density, overall luminance, and the average spacing between the dots. The second type consisted of dot spacing displays, produced by varying the average distance between the dots in each region while holding the average dot density equivalent. The regions of such displays readily appear to the human eye as differing in "clumpiness" and correspond to the second-order texture differences of Julesz's classification scheme.

Four pigeons were tested using computer-generated random dot stimuli in a two-alternative choice task. In this task, the pigeons were required to peck at the half of the display containing the target region to receive a food reward. The target region was randomly located on the left or right half of the display over trials, and consisted of a 6-cm square area of dots that differed from its surrounding dotted context in either dot density or dot spacing. The total size of the display was 20.4 cm by 15.3 cm. For both display types, we selected values that were relatively easy for the human eye to discriminate and that seemed roughly similar in difficulty (although no systematic attempt was made to equate discriminability between the display types). For the density displays, we used two combinations of density, with the target appearing equally often as either denser or sparser than the surrounding dots. For the spacing displays, the density of dots in the target and distractor regions was fixed to be equal over a 6-cm area of the display, and the range of values separating each dot was varied, producing displays in which the dots in one region were disorganized and appeared more clumped in comparison to the other. For both dot density and spacing displays, the target and distractor regions were composed from two different combinations of values. In addition, two different sized dots (1.5 mm & 2.7 mm) were tested. These different factors, in combination with the randomization of each dot’s location, produced highly variable, trial-unique displays at the local level, but ones with a consistent and readily visible structure if the birds were capable of processing their global organization.
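A rough sketch of how the two display types can be generated is shown below. The sizes, probabilities, and jitter values are illustrative assumptions, not the calibrated physical parameters actually used. A density display raises or lowers the probability of a dot within the target region; a spacing display starts from an equal-density lattice and jitters dot positions more within one region, producing the "clumped" appearance.

```python
import random

def dot_density_display(width=400, height=300, target=(50, 50, 120, 120),
                        p_background=0.02, p_target=0.05, cell=4):
    """Place dots on a coarse grid of candidate positions; the target
    rectangle gets a higher (or lower) dot probability than the surround."""
    x0, y0, w, h = target
    dots = []
    for x in range(0, width, cell):
        for y in range(0, height, cell):
            in_target = (x0 <= x < x0 + w) and (y0 <= y < y0 + h)
            p = p_target if in_target else p_background
            if random.random() < p:
                dots.append((x, y))
    return dots

def dot_spacing_display(width=400, height=300, target=(50, 50, 120, 120),
                        spacing=20, jitter_background=2, jitter_target=9):
    """Start from a regular lattice (equal density everywhere) and jitter the
    dot positions more inside the target region, making it look 'clumped'."""
    x0, y0, w, h = target
    dots = []
    for x in range(0, width, spacing):
        for y in range(0, height, spacing):
            in_target = (x0 <= x < x0 + w) and (y0 <= y < y0 + h)
            j = jitter_target if in_target else jitter_background
            dots.append((x + random.randint(-j, j), y + random.randint(-j, j)))
    return dots
```

Because every dot is individually placed at random, each call produces a trial-unique display at the local level while preserving the same global target structure, which is the property the experiments exploit.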

The birds learned this discrimination relatively easily, indicating they could detect the global regularities in the displays. The next figure shows mean acquisition data for the four pigeons over the first fifty sessions of training (click to see the acquisition results). In general, dot density displays were easier for the birds to discriminate than dot spacing displays, a result consistent with what has been found in humans (Pollack, 1972). Reaction times to both display types were similar and relatively rapid, with the birds' first pecks recorded about one second after the onset of the display over the last ten sessions of training (mean RT for each display type: dot density = 1128 ms, dot spacing = 1033 ms). There were no significant effects of dot size for either display type. Overall, the major implication of these results is that the pigeons could easily and quickly perceive the global structure of these dot texture displays, integrating sufficient information from the density and spacing of the local dots to locate the global target's position.

Further evidence consistent with this idea comes from subsequent observations in which we indirectly manipulated the display’s visual angle. For this study, we systematically varied the position of the computer display relative to the front touch panel of the chamber, testing the pigeons with the display monitor moved back either 0, 3, 6.5, 9.5, or 13 cm from the viewing window. Black panels attached between the monitor and front panel closed the visual gap created by this physical operation. With the monitor pulled back, the visual angle of the display and its correlated properties, such as dot size, were proportionally reduced. This reduction in visual angle should allow the birds to see a greater proportion of the display in any one glance and perhaps increase their capacity to detect the display's global structure.
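The geometry behind this manipulation is straightforward: a stimulus of physical size s viewed from distance d subtends a visual angle of 2·arctan(s / 2d), so pulling the monitor back shrinks the angular size of the whole display and of each dot proportionally. The sketch below illustrates this relation; the baseline eye-to-screen distance is a purely nominal, assumed value (the display width is taken from the text).

```python
import math

def visual_angle_deg(size_cm, distance_cm):
    """Visual angle (in degrees) of an object of a given size at a given distance."""
    return math.degrees(2 * math.atan(size_cm / (2 * distance_cm)))

EYE_TO_SCREEN_CM = 5.0      # assumed nominal eye-to-touchscreen distance (illustrative)
DISPLAY_WIDTH_CM = 20.4     # display width reported in the text

for setback in (0, 3, 6.5, 9.5, 13):
    d = EYE_TO_SCREEN_CM + setback
    print(f"monitor setback {setback:4.1f} cm -> display subtends "
          f"{visual_angle_deg(DISPLAY_WIDTH_CM, d):5.1f} deg")
```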

The next figure shows data from the last 25 sessions of this visual angle experiment (click here to see the visual angle results). This figure shows the mean and individual performance for each of the birds with dot density displays of intermediate to high discriminability (as determined by earlier experiments). Overall, the accuracy of locating the target increased as the monitor was moved back to a distance of 6.5 cm, after which there was a slight, but steady, decline in performance. This distance relation was found for all four birds and for both dot sizes tested.

The increase in accuracy with viewing distance suggests that reducing the visual angle may have aided the birds in detecting the display’s global structure. Other factors could also have played a role. For instance, these distance data are consistent with previously published data on the near point of accommodation in pigeons (Macko & Hodos, 1985; Hodos, Leibowitz, & Bonbright, 1976). Hodos and his colleagues determined that the closest possible point in the frontal field permitting the best focused image was approximately 6 to 7 cm. The similarity between our best monitor distance and their estimate suggests that moving the monitor back may also have increased the sharpness of the image. Further increases in this distance did eventually produce a slight decline in performance, suggesting that distances beyond 6-7 cm may begin to tax in some way the capacity of the pigeon's frontal visual field. Other possible reasons for this decline may have been that the birds paid less attention to the stimuli as they moved farther away, or had a harder time accurately directing their responses to the stimuli because of the increased parallax caused by the separation of the touchscreen and monitor. Despite these complications, these results suggest a practical benefit to those studying visual perception and discrimination in pigeons. In the vast majority of such laboratory experiments the stimuli are presented very close to the animals. The above data suggest that even slight increases in viewing distance may benefit performance, perhaps by allowing greater control by the display's global properties. Of course, this benefit must be weighed against the potential loss of stimulus control that might also result from this separation. Future studies need to focus more systematically on the role of stimulus viewing distance in the discrimination of complex stimuli by these animals and its possible impact on the extraction of local and global information.

Temporal properties

Perceptual grouping in humans is typically completed within 150 ms after the onset of the stimulus. In our lab, we have tried in several ways to quantify just how quickly this same type of global information is processed in pigeons. In one of our experiments (Cook, Cavoto, Katz, & Cavoto, 1997), we used a modified rapid serial presentation procedure to see how quickly pigeons might be able to group differences in textured displays (for more details about this specific study, go to Cook, Cavoto, Katz, & Cavoto's (1997) website). Pigeons were tested with odd-item texture stimuli that rapidly changed their target and distractor colors across frames within a trial. During a trial, a pigeon might briefly see a target of red squares on a background of green squares for 100 ms, followed by a change to blue on yellow squares, then to orange on white, and so on, until they completed a response to the display (see the illustration to the right; note that the real displays were made from multiple small local elements and were not filled in as in this example). In order to localize these constantly changing targets, the pigeons needed to process the global differences fast enough to at least partially determine the target’s location within the time of a single "frame" of the display (although this partial information may accumulate over the separate frames). In the experiment, we tested the pigeons with displays in which either the target and distractor both changed (display-variable conditions), just the target region changed (target-variable conditions), or just the distractor region changed (distractor-variable conditions) across each successive frame. We then varied the timing of these successive changes, with each frame lasting either 100, 250, 500, or 1000 ms within a trial (click here to see animated real-time examples of these test displays from the Cook et al. (1997) website). For our current purposes, the most important finding from these experiments was that the pigeons performed at above-chance levels even when the entire display was changing every 100 ms. This suggests that the birds rapidly group global color differences in about the same amount of time as established for human texture segregation.
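The trial structure of this procedure can be sketched as follows. The condition names follow the text, but the color set and the exact frame-replacement logic are illustrative assumptions: depending on condition, the target color, the distractor color, or both are replaced on every frame, with each frame held for one of the tested durations.

```python
import random

COLORS = ["red", "green", "blue", "yellow", "orange", "white", "purple", "cyan"]

def rsvp_frames(condition, n_frames=20, frame_ms=100):
    """Return a list of (target_color, distractor_color, frame_ms) tuples
    describing one trial.

    condition: 'display-variable'    -> both regions change every frame
               'target-variable'     -> only the target region changes
               'distractor-variable' -> only the distractor region changes
    Within every frame the two regions always differ in color."""
    target, distractor = random.sample(COLORS, 2)
    frames = []
    for _ in range(n_frames):
        frames.append((target, distractor, frame_ms))
        if condition in ("display-variable", "target-variable"):
            target = random.choice([c for c in COLORS if c not in (target, distractor)])
        if condition in ("display-variable", "distractor-variable"):
            distractor = random.choice([c for c in COLORS if c not in (target, distractor)])
    return frames

# Example: a display-variable trial whose colors change every 100 ms.
for frame in rsvp_frames("display-variable", n_frames=5, frame_ms=100):
    print(frame)
```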

Summary of the above texture results

As a set, these texture studies indicate that pigeons can integrate and use global information derived from different types of local elements when presented as large visual patterns. Further, the many similarities in experimental outcomes suggest that pigeons and humans share highly analogous mechanisms for processing this type of visual information, despite the differences in the size and neural organization of their respective visual systems. These results, along with those of others, suggest that these mechanisms similarly function to detect edges and contours over large areas of the visual field, and that they are organized into separate dimensional channels that are strongly influenced by element similarity and proximity. We have argued that this similarity is likely no accident and represents the common need for both species to solve the fundamental visual task of locating and identifying the global boundaries and surfaces of objects.

III. Global and Local Control in Hierarchical Stimuli

The above texture experiments were focused on studying the processing of a display's global organization. Because of their design, however, they were not well suited for examining the specific interaction between local and global information within a display. One of our next projects was thus directed at determining more precisely just how pigeons process both levels of organization within the same stimulus (Cavoto & Cook, 2001). This question has been explored in humans by testing hierarchical stimuli in which the subjects are simultaneously responsible for identifying relevant information from either level. In many such tests, the local level is defined by small individual letters, which are then used to configure larger global letters (see the examples in the top two rows in the picture below). In humans, the experimental evidence has generally suggested that the overall or global configuration of such hierarchical stimuli is typically processed before the local details-- a finding known as the global precedence effect (Navon, 1977, 1981, 1983; although a long list of factors can alter this basic result). Similar global-like effects with humans can also be seen in the word superiority effect (Reicher, 1969) and the various configural superiority effects (Weisstein & Harris, 1974; Pomerantz, Sager, & Stover, 1977) that have been reported. Would pigeons show a similar pattern in processing the two levels?

Our experiment tested four naïve pigeons in four different stimulus conditions, two consisting of hierarchical stimulus patterns similar to those tested with humans and two consisting of size-matched solid forms (slightly degraded examples of Cavoto and Cook's stimuli appear to the right). The two types of hierarchical stimuli were composed of a combination of four different relevant letters (T, N, X, H) and an irrelevant letter (O). In the global-relevant hierarchical condition (the top row to the right), the stimuli consisted of the local irrelevant letter arranged to form each of the four relevant letters. As a result, in these stimuli only the global level contained the information needed for the subsequent four-alternative test. This test required the birds to report which one of the four relevant letters had been presented during that trial. The local-relevant hierarchical condition (the second row) was composed of the relevant letters arranged to form the irrelevant letter 'O' at the global level. As such, only the local level of these stimuli contained information relevant for the test. On any one trial, only one or the other organizational level was relevant, but both levels were tested equally often within a session, thus requiring the birds to process both levels. The other two stimulus conditions consisted of the four relevant letters tested as solid forms matched in size to the hierarchical conditions. The global-equivalent stimuli (third row) were matched to the size of the global-relevant stimuli. The local-equivalent stimuli (bottom row) were simply the local letters used to create the local-relevant condition, presented as single letters.
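For readers unfamiliar with this kind of hierarchical (Navon-style) stimulus, the sketch below builds a rough text rendering of one: a coarse bitmap defines the global letter, and a copy of the local letter is placed at each filled cell. The bitmaps and letter pairings here are illustrative assumptions, not the actual stimuli used by Cavoto and Cook (2001).

```python
# Coarse 5x5 bitmaps defining a few global letter shapes (illustrative only).
GLOBAL_SHAPES = {
    "T": ["#####",
          "..#..",
          "..#..",
          "..#..",
          "..#.."],
    "H": ["#...#",
          "#...#",
          "#####",
          "#...#",
          "#...#"],
}

def hierarchical_letter(global_letter, local_letter):
    """Return a multi-line string in which copies of the local letter are
    placed at every filled cell of the global letter's coarse bitmap."""
    rows = []
    for row in GLOBAL_SHAPES[global_letter]:
        rows.append(" ".join(local_letter if cell == "#" else " " for cell in row))
    return "\n".join(rows)

# A global-relevant style stimulus: irrelevant local O's forming a global T.
print(hierarchical_letter("T", "O"))
# A local-relevant style stimulus would instead arrange relevant local letters
# (T, N, X, H) into the irrelevant global letter O.
```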

These four conditions were tested using a four-alternative choice procedure. Following the presentation of a stimulus from one of the four stimulus conditions at a random location within an area of the computer screen, four choice stimuli appeared, one in each corner of the test screen around the viewing area. Each choice stimulus was associated with one of the four different relevant letters. The pigeons’ task was to peck at the choice stimulus associated with the relevant letter that had appeared on that trial, regardless of whether it had appeared at the global or local level.

For the purposes of this chapter, we concentrate here on only two key observations from Cavoto and Cook's (2001) four experiments. The first result concerns how quickly the birds learned to discriminate each of the four conditions. The second concerns the order in which information from the global and local levels comes to control choice behavior as a function of stimulus exposure duration.

The next figure shows the acquisition of the letter discrimination task for the four stimulus conditions (click here to see the results). Two results should be noted. The first is that the local-relevant discrimination was learned much faster than the global-relevant discrimination. In fact, the last two panels in the figure represent separate attempts to specifically improve the discrimination of the global-relevant condition. With experience the birds did get better in the latter condition, but initially they were clearly better at discriminating the local letters in the hierarchical conditions. This was true despite the fact that these small letters were harder to discriminate than the larger letters in the size-matched controls, as revealed by the slower and poorer performance in the local-equivalent condition. The second result that should be noted is that accuracy with the size-matched global-equivalent condition was superior to that with the global-relevant condition. This indicates that the observed local advantage during acquisition was not due to the larger visual angle and size of the global-relevant stimuli. Instead, the problem with the global-relevant condition seems due to its overall organization. Thus, unlike the prior texture results, the figural letter stimuli tested here tended to produce a local advantage, in which the local information seemed to dominate the global information.

To further investigate this local advantage, we conducted three additional experiments. In the last of these we manipulated the duration of stimulus presentation. The purpose of the experiment was to examine in more detail the time course over which local and global information controls behavior during a single stimulus presentation. To do this, we tested a new "probe" stimulus condition. These probe test stimuli consisted of hierarchical stimuli in which relevant letters were presented at both levels of organization at the same time (see the example to the right). When tested with these “conflict” probe stimuli, the birds were free to report either level, and this report presumably reflects the relative degree of processing devoted to each level. Given these test stimuli, would the birds tend to report the local or global level more often, or a mixture of both? More importantly, how might these reports change as a function of how long the stimulus had been displayed? Because competing information was being presented simultaneously at the two levels, we made several minor adjustments to the procedures in order to increase the test's precision. These adjustments were designed to ensure that the pigeons could just as easily discriminate the local as the global letters (see Cavoto & Cook, 2001 for details). This way, any difference in responding to the conflicting levels in the probe stimuli would represent the accumulation of information from each level, rather than the discriminability of stimulus information from each level.

In the experiment, we tested the “conflict” probe stimuli at controlled stimulus durations to examine the temporal development of control by the global and local levels. In total, five different durations (.25, .50, .75, 1.75, or 5 seconds per presentation) were tested with these stimuli. These probes were randomly mixed in among the four stimulus conditions described previously, along with other probe stimuli in which the two levels were redundant with each other. The next figure shows the mean responses averaged across all four birds to these conflict probe stimuli as a function of stimulus duration. Unlike the acquisition results, the dependent variable of most interest here is not choice accuracy but instead the proportion of choices to the different test stimuli associated with relevant letters presented in the stimulus. The figure shows that over all durations, the majority of choices were to the relevant letter presented at the local level. This local bias is consistent with the local advantage found during acquisition, where the development of the local-relevant discrimination preceded the global-relevant discrimination. A closer examination of the data reveals informative differences among the pigeons in their responses to these test stimuli.

The next figure shows the conflict probe test data for each bird (click here to see the individual bird results). Depending upon the bird, the local advantage changed in interesting ways as a function of duration. Two birds (#1H & #3N, the left two panels) consistently identified the local letter of the conflict stimuli more often across all stimulus durations, although this tendency was reduced at the shortest durations. In contrast, the other two birds (#2R & #4B, the right two panels) showed systematic changes in the proportion of global and local choices as a function of stimulus duration. At the short durations, these birds reported the local letter most often, while at the longer durations the global letter was reported significantly more often. Thus, for two birds the local advantage was present across all durations, while the two other birds shifted to a global advantage at the longest duration. This pattern suggests the birds had different, but consistent, behavior patterns or strategies for initiating processing of the global level of these hierarchical stimuli. Remember that despite the apparent precedence of local information in the early experiments, the contingencies and procedures of the global-relevant condition required the birds to process the global level on a large percentage of the trials within a session. What these individual data reveal is that this global processing was accomplished in different ways. Two birds, #2R and #4B, used a time-based strategy, such that as time passed within a trial, stimulus control shifted from the smaller to the larger spatial scale of the hierarchical stimuli. The other two birds, #1H and #3N, seem to have adopted a different strategy. Time appears not to have been a factor, as these birds showed a local advantage across all durations. For these birds, the cue to begin processing global-level information was likely the presence of the irrelevant local letters. This would explain why these two birds always chose the local letter in the conflict condition, as these stimuli contained no irrelevant letters to initiate this switch.

The results of these two experiments indicate that these pigeons exhibited a strong local advantage in processing these hierarchical figural stimuli. Further, the conflict probe data suggest that this advantage starts early in the processing of these stimuli. The tendency to make choices based on local information at short presentation times suggests this level was available first for processing or had a higher priority than the global-level information. Why might this be? In general, there are two classes of explanations for such precedence effects. The first class attributes the effect directly to stimulus factors, such as stimulus size or arrangement, while the second attributes it to cognitive factors, such as differences in attention or memory processing.

Concerning possible stimulus factors, one might argue that the local letters in the current tests were more salient than the global letters because of the different visual angles they subtended. Several lines of evidence argue against simple stimulus size being directly responsible, however. For example, we consistently found that the larger, solid letters supported higher accuracy than smaller letters. This suggests that visual angle per se was not the critical factor.

Another possibility is that the birds had a specific problem in grouping together the disconnected elements of the global-relevant stimuli. During acquisition, the birds were generally better with the solid global-equivalent condition than the global-relevant condition, suggesting that joining the separated elements of the global condition into a more complex figural pattern was problematic. This factor is a known and important contributor to the local advantage reported for non-human primates (Deruelle & Fagot, 1998; Fagot & Deruelle, 1997; Fagot & Tomonaga, 1999) and may play a role for pigeons (Donis & Heinemann, 1993). The major argument against such an explanation in the current test is that we equated the global and local levels in the critical conflict stimuli. Thus, whatever effect the separation of the elements had, if any, it was at least compensated for by other factors that permitted global performance to be equivalent to that with the local level. This suggests that the local advantage in the conflict probe test was not directly due to a “figural integration” failure. We do think this is an important avenue to explore in future research, however.

If stimulus factors alone do not explain the effects, then one might begin looking at cognitive mechanisms. One possible reason the majority of animal studies have reported a local advantage is that local-level features may have perceptual or attentional priority, especially at close range. A second possibility is that global and local information are processed in parallel at the perceptual level, but local information becomes available more quickly in memory. The attentional explanation is essentially the complement of Navon’s (1977) original explanation for the human global precedence effect. While the strong form of Navon’s sequential hypothesis is generally not accepted, others have continued to speculate that the global perceptual channels are processed faster or become available sooner in humans, perhaps because of their lower spatial frequencies or the organization of the different brain regions responsible for separately processing these different levels (Delis, Robertson, & Efron, 1986; Ivry & Robertson, 1998). One intriguing possibility for why pigeons may have shown a perceptual/attentional local advantage stems from the structure of their visual system, a speculation further discussed in the conclusions.

Finally, it should be noted that these results conflict with those recently reported by Fremouw, Herbranson, and Shimp (1998). They tested pigeons with hierarchical stimuli in a manner very similar to our procedure (for more details and the results from these experiments, see Shimp, Fremouw, & Herbranson, 2001). They found a different, and more human-like, pattern of results with their birds (click to see Fremouw et al.'s (1998) results). In their task, the pigeons were tested in a two-alternative choice procedure in which they had to report forms or letters presented at either the global or local level. After learning the task, Fremouw et al. then manipulated the relative probabilities of a locally-relevant or globally-relevant display within a session. Their primary result was that their pigeons responded faster to the more frequently tested level. That is, when local trials were more frequent the pigeons were faster to respond to local information than when global trials were more frequent, and vice versa. Fremouw et al. (1998) suggested that these frequency priming data indicate that pigeons can flexibly shift their attention between the different levels of hierarchical stimuli in a human-like way. It is not easy at the moment to reconcile our different outcomes. We manipulated several additional factors in our study (random stimulus placements, mixes of different-sized stimuli, more letters to be discriminated; see Cavoto & Cook, 2001) that we believe significantly improved on the design used by Fremouw et al. (1998), but a proper resolution will have to await future research.

IV. Object and Motion Perception  

In this last section, we briefly touch upon some of our recent research that further speaks to the issue of control by global and local structure, although it does so using a different approach than that employed in the two previous sections. In this research we have investigated more directly if and how pigeons might perceive the global structure of object-like stimuli. In these experiments the pigeons were tested in discriminations involving 3D projections of objects that move dynamically on the computer screen. Motion, due to the movement of the observer or the object, is an essential property in viewing most objects (for a review, see Dittrich & Lea, 2001). The contribution of motion to perception and action in animals has long been overlooked, although these topics are now beginning to receive more attention. By adding motion to our stimuli we hoped to encourage the pigeons to view these stimuli in a more unitary and object-like way. Two different experiments are discussed below. In the first experiment, we added motion to the objects to see how it might enhance their discrimination (Cook & Katz, 1999). In the second experiment, we used different types of motion, as portrayed with object-like stimuli, to form the basis of the discrimination (Cook, Shaw, & Blaisdell, in press).

Discriminating objects in motion

The major question in the first experiment was whether the pigeons would benefit from the addition of motion when discriminating among differently shaped objects. The pigeons were trained and tested in a discrete trial go/no-go discrimination with computer-generated projections of cube and pyramid object stimuli presented one at a time. On half the trials, these objects were presented statically. That is, they did not move within a trial. Depending upon the experiment, each static trial tested a randomly selected angle from around either their Y- or X-axes. On the remaining half of the trials, the objects were presented dynamically, appearing to rotate or spin around one or both of these axes. In both cases, the pigeons’ task was to peck at the object associated with reinforcement. For two birds this object was the cube, and for the two remaining birds it was the pyramid. Two questions were of interest. First, was the pigeons’ discrimination of these objects mediated by the simple 2D local features of these computer images, or based instead on a higher-order representation of them as 3D objects? Second, what contribution would motion make to this discrimination?
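As a rough illustration of how such displays can be generated (an assumption about the general approach, not the actual rendering software or parameters used by Cook & Katz, 1999), the sketch below rotates the vertices of a cube about the Y axis and projects them onto the screen plane. A static trial samples a single random angle, whereas a dynamic trial steps the angle across successive frames so the object appears to spin.

```python
import math
import random

CUBE_VERTICES = [(x, y, z) for x in (-1, 1) for y in (-1, 1) for z in (-1, 1)]

def project(vertices, angle_deg, scale=100, viewer_z=4.0):
    """Rotate vertices about the Y axis and apply a simple perspective
    projection onto the 2D screen plane (illustrative rendering only)."""
    a = math.radians(angle_deg)
    points = []
    for x, y, z in vertices:
        xr = x * math.cos(a) + z * math.sin(a)      # rotation about Y
        zr = -x * math.sin(a) + z * math.cos(a)
        f = scale / (viewer_z - zr)                 # perspective scaling
        points.append((xr * f, y * f))
    return points

def static_trial():
    """A single, randomly chosen view of the object."""
    return [project(CUBE_VERTICES, random.uniform(0, 360))]

def dynamic_trial(n_frames=60, deg_per_frame=6):
    """A sequence of views in which the object appears to rotate."""
    start = random.uniform(0, 360)
    return [project(CUBE_VERTICES, start + i * deg_per_frame)
            for i in range(n_frames)]
```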

From a variety of different tests, we concluded that the pigeons were likely controlled by the 3D properties of these stimuli, and that this control was enhanced by the motion. Although several 2D hypotheses were considered, these alternatives were rejected based on various combinations of the evidence (for details, see Cook & Katz, 1999; click here to see the results). For instance, we found that the dynamic presentation of the objects consistently supported better discrimination than the identical set of static views collected one at a time. This dynamic superiority effect was especially clear for those cases where the appearance of the object was transformed or distorted by its rotation, suggesting that the motion enhanced object recognition. Further, we found their discrimination was relatively immune to transformations in the size of the objects, the rate and direction of motion, different combinations of motion, and changes in their surface color (at least in the dynamic condition). This type of transformational invariance is highly characteristic of human object recognition. Similarly for the pigeons, it suggests their discrimination of these stimuli was not tied to specific image properties, but was mediated by a more generalized and flexible representation. Lastly, three of the four pigeons also showed some evidence of recovering the structure of these objects from just the pattern of their motion (click here to see the stimuli). In this case, they were better at discriminating dynamic presentations of the stimuli when all contour and surface information had been removed (i.e., 2D monochromatic colored blobs moving consistently with the rigid projective geometry of either a cube or pyramid) than when these were presented statically, a result also consistent with a 3D hypothesis.

Discriminating motion from objects

Our second set of experiments also involved combining object-like stimuli and motion. In these studies, we investigated whether different motions produced by objects could be the basis for a go/no-go discrimination. Specifically, the pigeons had to discriminate among video stimuli portraying the actions of “through” and “around” in relation to a number of different objects. Using computer animation software, video stimuli were created that appeared to either go around an approaching object or fly through its interior opening. The next figure shows the trajectory and timing of these pathways (click here to see the timing and path information), while the following figure shows the variety of different objects used to create these different motions (click here to see the object set).

Example videos of each of the motion pathways can be seen in the following two links (around video clip / through video clip). Our first experiment using these stimuli explored the acquisition and transfer of this “through/around” discrimination. Seven birds were tested in all. Four of them had prior experience with other discriminations, including a variation of Cook and Katz’s (1999) dynamic object discrimination. We tested the four experienced birds first. Using a go/no-go discrimination, two of these birds were reinforced on a VI schedule for pecking at the “around” video sequences (Around+) and two for pecking at the “through” sequences (Through+). The other motion sequence was designated the S- and pecks to it were never reinforced during training. Five different objects were initially used to present each type of motion. At first, each video lasted approximately 3 seconds and was repeated a little over six times in succession during the 20-second period forming each S+ and S- trial. The birds showed little learning at first, requiring several procedural modifications (timing changes, additional punishment) in an effort to promote learning. The change that eventually promoted learning was reducing the number of objects being tested from five to one. The three naïve pigeons were then trained from the beginning with only one object and only on the Around+ discrimination. Several interesting trends were observed. Overall, the Around+ discrimination seemed easier to learn than the Through+ discrimination. The two experienced and three naïve birds all learned this discrimination readily. However, the two experienced birds in the Through+ condition failed to learn (see Cook, Shaw, & Blaisdell, in press, for some speculations and caveats about the reason for this failure).

Following acquisition, we transfer tested the five successful pigeons with videos containing novel objects that differed in color, shape, and material from those experienced during training. We hypothesized that if the pigeons had learned to discriminate these videos based on a generalized representation of the motion, then the specific identity of the objects shouldn’t matter. This was exactly what we observed as shown in the next figure. The birds generally showed above chance discrimination on non-reinforced test trials with these new videos, although some variability seemed to be added by the complexity of the surface appearance used to render these new objects.

As with the prior experiment, the key question is once again: were the birds controlled by the 3D properties of the object, or were they simply picking up on simple 2D image properties that mediated the discrimination? Our next experiment tried to begin answering this question by seeing what effect randomizing the order of the different frames within a video had on performance. The logic here was that if the birds were relying on 2D cues, these would still be present in the randomized videos. For instance, in the Around condition the objects move off to the right and fill that side of the display, while in the Through condition they symmetrically fill and move off to both sides of the screen. If such positional 2D cues were all that controlled the discrimination, then performance in the randomized condition should match, or come to match, that of the coherent baseline conditions, because these same 2D cues would still be present. If, on the other hand, the birds were tending to see an approaching 3D object, then this randomization should disrupt their discrimination, since it interferes with seeing a coherent object sequence. In two different tests, we presented novel objects with the frames of the videos shown in either a coherent or randomized order within the same session (click here to see the test results). In both conditions, we reinforced the randomized video presentations in the same way as the coherent trials, with “Around” presentations reinforced on a VI schedule and “Through” trials producing a short time out. The next figure shows the results from the second of these tests. We found in both tests that coherently ordered presentations of the motion discrimination supported better discrimination than did the randomized presentation of the same video containing the novel object. This was true even after multiple sessions of testing with the randomized presentations. This difference between coherent and randomized presentation indicates that the temporal sequencing of the individual frames was critical to the discrimination.
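The frame-randomization manipulation itself is easy to express: both trial types show exactly the same set of frames, and only their temporal order differs. The sketch below illustrates the idea with a hypothetical frame count.

```python
import random

def frame_order(n_frames, coherent=True):
    """Return the indices of video frames in the order they are shown.

    Coherent trials preserve the original temporal order, so the approach
    sequence unfolds normally; randomized trials shuffle the same frames,
    leaving any static 2D cues intact while destroying the coherent
    motion-in-depth sequence."""
    order = list(range(n_frames))
    if not coherent:
        random.shuffle(order)
    return order

# Example: a roughly 3-second video at an assumed 30 frames per second.
print(frame_order(90, coherent=True)[:10])    # 0, 1, 2, ... in order
print(frame_order(90, coherent=False)[:10])   # the same frames, scrambled
```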

Why did the randomization of the frames disrupt discrimination? We considered three possibilities. First, this drop in performance may represent a form of generalization decrement in the randomized condition. That is, the birds' entire experience prior to this point was only with coherent sequences, and as such the randomized videos may have been too different from their prior experience. Such experience-based explanations, however, are not consistent with the general failure to see much improvement in discrimination with continued experience. Instead, the randomization of the frames seems to prevent critical sequential information from being extracted from the video.

A second possibility is that the birds were somehow relying on the timing between frames to perform the discrimination. For example, the birds may have used the start of the video to anticipate when to look for 2D cues in the latter frames. If so, then the frame randomization simply breaks up this temporal cueing. However, in experiments where we have altered the timing of the frames, but not their content, the birds have continued to perform well. The latter results suggest that the basic timing of the video's frames is not critical to the discrimination.

The intriguing third possibility is that the birds perceived the videos much as they were intended – as 3D objects being approached in depth over a textured ground. In this case, the randomization manipulation is effective because it disrupts the perception and global integration of the movement and depth cues across frames. This disruption effect is consistent with a general hypothesis that the pigeons might be interpreting these videos as showing objects that are being approached. Of course, this conclusion must be accepted provisionally pending more tests of the 2D and 3D interpretations of these very complex stimuli (e.g., Reid & Spetch, 1998). Nevertheless, what does seem increasingly clear from these types of experiments is that adding motion creates the best opportunity for testing how pigeons integrate temporal, motion, depth and surface information into their perception of the visual world's objects.   

V. Conclusions

The lines of research reviewed above paint a fascinating, but complex, portrait of how global and local information interacts in controlling different visual discriminations by pigeons. The experiments employing a variety of texture stimuli suggested these animals can readily and quickly extract global differences in large arrays that contain strong edge and surface-like characteristics. In contrast, the experiments in the second section testing hierarchical figural stimuli suggested that local information was more important, being processed first and having a stronger influence on the initial formation of stimulus control. The last line of research testing dynamic object-like stimuli seems to suggest control by the global integration of object and motion properties over time. When combined with previously collected results from other labs, it appears clear that pigeons are quite elastic in how they process information from different spatial scales of a visual scene -- being controlled by local, global, or both levels depending on the situation. How can these apparently conflicting results be reconciled? As is very much the case with humans, the answer is going to have to be more complex than simply saying that one level precedes the other or that one consistently dominates the other.

The key now is to understand the flexibility of this system, what factors control and influence it in birds, and how these compare to what has been established in humans. Several factors are hinted at by the results described above. These include: general and specialized properties of the avian visual system, attentional factors controlled by the history of reinforcement, and stimulus factors such as feature salience, motion, configural organization, and viewing distance. A brief comment about each is perhaps in order.

One potentially important factor that has been ignored in this area of inquiry concerns the specialized structure and functions of the avian visual system (Husband & Shimizu, 2001; Zeigler & Bischof, 1993). Animal visual systems have presumably evolved to extract the most useful features of the physical world, and avian systems have several characteristics that might influence the processing of hierarchical and spatial information. Pigeons, for instance, have two specialized areas or foveae in their eyes, which may serve different functions (Bloch & Martinoya, 1982, 1984; Catania, 1964; Jager & Zeigler, 1991). One of these areas is specialized for binocular perception of the visual space immediately in front of the bird. This frontal visual field has presumably evolved for myopic foraging for food on the ground. The second area is specialized for wide-field monocular perception of the visual area around and lateral to each side of the bird. This lateral visual field has presumably evolved for predator detection and flight control. Because of the spatial proximity and central location of our stimuli, it is highly likely they were viewed primarily with the frontal field (Goodale, 1983). If so, its near-sighted acuity or potentially specialized capacity for examining fine stimulus details may be responsible for the local advantage observed in some of our experiments. In contrast, the lateral visual fields may be more concerned with the larger-scale integration of scene and flight control information (Martinoya, Rivaud, & Bloch, 1984), and might be more sensitive to global information. Perhaps for those discriminations that require the processing of more global information, the pigeons attempt to use their lateral fields more often to view these stimuli. Research with humans has found that retinal location may influence processing in a similar way, finding local precedence with centrally located stimuli and global precedence with peripherally located stimuli (Lamb & Robertson, 1988). A better understanding of the different functions of these visual fields and their influence on the processing of information at different spatial scales is one key direction for future research.

Besides sensory, perceptual, and neural considerations, the role of attention is an important cognitive factor that also needs to be examined. If nothing else, these experiments show that pigeons can flexibly report information about whatever scale is currently being reinforced by the experimenter. What we need to better understand next is how this attention is deployed spontaneously and how attention to the different levels interacts when both are simultaneously relevant. So far, the two pigeon studies conducted on the latter issue have revealed slightly different stories about the relative flexibility and salience of how attention might be deployed to each level (Cavoto & Cook, 2001; Fremouw, Herbranson, & Shimp, 1998). How reinforcement influences the distribution of attention across the different levels of a stimulus is yet another important avenue for future research (for more on the issue of attention in birds in related domains, see the chapters by Shimp, Fremouw, & Herbranson, 2001; Sutton & Roberts, 2001; & P. Blough, 2001).

Another perceptual/cognitive factor to consider is the depth and type of processing required by the discrimination. For instance, the global control seen in the texture experiments may stem from their tapping early visual processes whose primary function is to quickly provide information about large-scale features like edges and surfaces, making them a relatively easy discrimination. On the other hand, the figural pattern perception required by the local/global experiments may have been in some sense harder, requiring a more complete and detailed processing of the letter figure, which may have biased the birds to process more local information.

A further consideration concerns the role of several stimulus factors that appear to determine, in part, which spatial scale comes to control behavior. The relative salience of featural and configural information certainly must influence the control exerted by each level. For instance, one of the reasons that Cerella (1986) may have found control by features is that cartoon characters, like Charlie Brown, are structured to contain very strong and simple features. Likewise, the global control observed in Wasserman et al.’s (1993) studies may reflect their use of geons to build their objects. Similar considerations might also apply to each of our experimental approaches.

Viewing distance is another factor likely to influence performance. The random dot texture data above suggest this might be the case. Whether viewing distance influences the ability of the animals to perceptually group information, brings a different part of the eye to bear on the stimulus, or both, will need to be worked out. Closely related to this is how well animals might be able to group disconnected, separate elements into whole figures. Non-human primates have a great deal of difficulty doing this, especially with sparse arrangements (Deruelle & Fagot, 1998; Fagot & Deruelle, 1997; Fagot & Tomonaga, 1999). Such spatial factors may also play a role in birds.

Motion is another factor that we think likely influences the type of control. Common motion has long been suspected of serving the important function of binding features together. In our experiments with dynamic stimuli, we hypothesize that the coherent motion of the separate features helps to create larger units of object-like information. Had these stimuli been presented only statically, for instance, a different and more local type of control might perhaps have developed.

This chapter has provided a wealth of different, sometimes conflicting, data about how pigeons process hierarchical information. For some, it might have been nicer to tell a tidy scientific story with a beginning, middle, and happy ending. In the present one, however, we are still only introducing the main characters and outlining the plot, with the dramatic resolution still many pages (and experiments) away. Given the importance of the scientific questions at hand, it should make for a good read as succeeding chapters are added by new research from the different labs highlighted in this book.

avcrule.gif (935 bytes)

VI. References

Barlow, H. B. (1978). The efficiency of detecting changes of density in random dot patterns. Vision Research, 18, 637-650.

Barrow, H. G., & Tenenbaum, J. M. (1978). Recovering intrinsic scene characteristics from images. In A. Hanson & E. Riseman (Eds.), Computer vision systems (pp.3-26). New York, NY: Academic Press.

Beck, J. (1966). Effect of orientation and shape similarity on perceptual grouping. Perception & Psychophysics, 2, 491-495. 

Beck, J. (1982). Textural segmentation. In J.Beck (Ed.), Organization and representation in perception (pp. 285-318). Hillsdale, NJ: Lawrence Erlbaum Associates.

Bischof, W.F., Reid, S. L., Wylie, D. R. W., & Spetch, M. L. (1999). Perception of coherent motion in random dot displays by pigeons and humans. Perception & Psychophysics, 61, 1089-1101. 

Bloch, S., & Martinoya, C. (1982). Comparing frontal and lateral visual acuity of the pigeon I. Tachistoscopic visual acuity as a function of distance. Behavioural Brain Research, 5, 231-244.

Bloch, S., & Martinoya, C. (1984). Comparing frontal and lateral visual acuity of the pigeon III. Different patterns of eye movements for binocular and monocular fixation. Behavioural Brain Research, 13, 173-182.

Blough, D. S. (1985). Discrimination of letters and random dot patterns by pigeons and humans. Journal of Experimental Psychology: Animal Behavior Processes, 11, 261-280.

Blough, P. M. (2001). Cognitive strategies and foraging in pigeons. In R. G. Cook, (Ed.), Avian Visual Cognition [On-line]. Available: pigeon.psy.tufts.edu/avc/pblough/

Broadbent, D. E. (1977). The hidden preattentive processes. American Psychologist, 32, 109-118.

Burgess, A. E., Wagner, R. F., Jennings, R. J., & Barlow, H. B. (1981). Efficiency of human visual signal discrimination. Science, 214, 93-94.

Catania, C. A. (1964). On the visual acuity of the pigeon. Journal of the Experimental Analysis of Behavior, 7, 361-366.

Cave, K.R., & Wolfe, J.M. (1990). Modeling the role of parallel processing in visual search. Cognitive Psychology, 22, 225-271.

Cerella, J. (1986). Pigeons and perceptrons. Pattern Recognition, 19, 431-438.

Cavoto, K. K., & Cook, R. G. (2001). Cognitive precedence for local information in hierarchical stimulus processing by pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 27, 3-16.

Cook, R. G. (1992a). Acquisition and transfer of visual texture discriminations by pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 18, 341-353.

Cook, R. G. (1992b). Dimensional organization and texture discrimination in pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 18, 354-363.

Cook, R. G. (1993a). Gestalt contributions to visual texture discriminations by pigeons. In T. Zentall (Ed.), Animal cognition: A tribute to Donald A. Riley (pp. 251-269). Hillsdale, NJ: Lawrence Erlbaum Associates.

Cook, R. G. (1993b). The experimental analysis of cognition in animals. Psychological Science, 4, 174-178.

Cook, R. G., Cavoto, K. K., & Cavoto, B. R. (1996). Mechanisms of multidimensional grouping, fusion, and search. Animal Learning & Behavior, 24, 150-167. 

Cook, R. G., Cavoto, B. R., Katz, J. S., & Cavoto, K. K. (1997). Pigeon perception and discrimination of rapidly changing texture stimuli. Journal of Experimental Psychology: Animal Behavior Processes, 23, 390-400.

Cook, R. G., & Katz, J. S. (1999). Dynamic object perception in pigeons. Journal of Experimental Psychology: Animal Behavior Processes, 25, 194-210.

Cook, R. G., Shaw, R., & Blaisdell, A. P. (in press). Dynamic object perception by pigeons: Discrimination of action in video presentations. Animal Cognition.

Cook, R. G. & Tauro, T. (1999). Object-goal positioning influences spatial representation in rats. Animal Cognition, 2, 55-62.

D’Amato, M. R. & Van Sant, P. (1988). The person concept in monkeys (Cebus apella). Journal of Experimental Psychology: Animal Behavior Processes, 14, 43-55.  

Delis, D. C., Robertson, L. C., & Efron, R. (1986). Hemispheric specialization of memory for visual hierarchical stimuli. Neuropsychologia, 24,  205-214.  

Deruelle, C. & Fagot, J. (1998). Visual search in global/local stimulus features in humans & baboons. Psychonomic Bulletin & Review, 3, 476-481.

Dittrich, W. H., & Lea, S. E. G. (2001). Motion discrimination and recognition.  In R. G. Cook  (Ed.), Avian visual cognition [On-line]. Available: pigeon.psy.tufts.edu/avc/dittrich/

Donis, F., & Heinemann, E. G. (1993). The object-line inferiority effect in pigeons. Perception and Psychophysics, 53, 117-122.

Duncan, J., & Humphreys, G. W. (1989). Visual search and stimulus similarity. Psychological Review, 96, 433-458.

Edwards, C. A., & Honig, W. K. (1987). Memorization and “feature selection” in the acquisition of natural concepts in pigeons. Learning and Motivation, 18, 235-260. 

Fagot, J., & Deruelle, C. (1997). Processing of global and local visual information and hemispherical specialization in humans (Homo sapiens) and baboons (Papio papio). Journal of Experimental Psychology: Human Perception and Performance, 23, 429-442.

Fagot, J., & Tomonaga, M. (1999). Global and local processing in humans (Homo sapiens) and chimpanzees (Pan troglodytes): Use of a visual search task with compound stimuli. Journal of Comparative Psychology, 113, 3-12.

Fremouw, T., Herbranson, W. T., & Shimp, C. P. (1998). Priming of attention to local and global levels of visual analysis. Journal of Experimental Psychology: Animal Behavior Processes, 24, 278-290.

Goodale, M. A. (1983). Visually guided pecking in the pigeon (Columba livia). Brain, Behavior & Evolution, 22, 22-41.

Hodos, W., Leibowitz, R. W., & Bonbright, J. C., Jr. (1976). Near-field visual acuity of pigeons: Effects of head position and stimulus luminance. Journal of the Experimental Analysis of Behavior, 25, 129-141.

Hoffman, J. E. (1979). A two-stage model of visual search. Perception & Psychophysics, 4, 319-327.

Huber, L. (2001). Visual categorization in pigeons. In R. G. Cook  (Ed.), Avian visual cognition [On-line]. Available: pigeon.psy.tufts.edu/avc/huber/

Husband, S., & Shimizu, T. (2001). Evolution of the avian visual system. In R. G. Cook (Ed.), Avian visual cognition [On-line]. Available: pigeon.psy.tufts.edu/avc/husband/

Ivry, R. B., & Robertson, L. C. (1998). The two sides of perception. Cambridge, MA: MIT Press.

Julesz, B. (1981). Textons, the elements of texture perception and their interactions. Nature, 290, 91-97.

Jager, R., & Zeigler, H. P. (1991). Visual field organization and peck localization in the pigeon (Columba livia). Behavioural Brain Research, 45, 65-69.

Kirkpatrick, K. (2001). Object recognition. In R. G. Cook  (Ed.), Avian visual cognition [On-line]. Available: pigeon.psy.tufts.edu/avc/kirkpatrick/

Kohler, W. (1947). Gestalt psychology: An introduction to new concepts in modern psychology. New York, NY: Liveright.

Lamb, M. R., & Robertson, L. C. (1988). The processing of hierarchical stimuli: Effects of retinal locus, locational uncertainty and stimulus identity. Perception & Psychophysics, 44, 172-181.

Marr, D. (1982). Vision. San Francisco, CA: Freeman.  

Macko, K. A., & Hodos, W. (1985). Near point of accommodation in pigeons. Vision Research, 25, 1529-1530.

Martinoya, C., Rivaud, S., & Bloch, S. (1984). Comparing frontal and lateral viewing in pigeons: II. Velocity thresholds for movement discrimination. Behavioural Brain Research, 8, 375-385.

Navon, D. (1977). Forest before trees: The precedence of global features in visual perception. Cognitive Psychology, 9, 353-383.

Navon, D. (1981). The forest revisited: More on global precedence. Psychological Research, 43, 1-32.

Navon, D. (1983). How many trees does it take to make a forest? Perception, 12, 239-254.

Neisser, U. (1967). Cognitive Psychology. New York, NY: Appleton-Century-Crofts.  

Pollack, I. (1972). Visual discrimination thresholds for one- and two-dimensional Markov spatial constraints. Perception & Psychophysics, 12, 161-167.

Pomerantz, J. R., Sager, L. C., & Stover, R. J. (1977). Perception of wholes and their component parts: Some configural superiority effects. Journal of Experimental Psychology: Human Perception and Performance, 3, 422-435.

Reicher, G. M. (1969). Perceptual recognition as a function of meaningfulness of stimulus material. Journal of Experimental Psychology, 81, 275-280.

Reid, S., & Spetch, M. L. (1998). Perception of pictorial depth cues by pigeons. Psychonomic Bulletin & Review, 5, 698-704.

Treisman, A., & Gelade, G. (1980). A feature-integration theory of attention. Cognitive Psychology, 12, 97-136.

Treisman, A., & Gormican, S. (1988). Feature analysis in early vision: Evidence from search asymmetries. Psychological Review, 95, 15-48.

Treisman, A. & Sato, S. (1990). Conjunction search revisited. Journal of Experimental Psychology: Human Perception and Performance, 16, 459-478.

Shimp, C. P., Herbranson, W. T., & Fremouw, T. (2001). Avian visual attention in science and culture. In R. G. Cook (Ed.), Avian visual cognition [On-line]. Available: pigeon.psy.tufts.edu/avc/shimp/

Sutton, J. E., & Roberts, W. A. (2001). Attentional processes in compound stimulus processing by pigeons.  In R. G. Cook  (Ed.), Avian visual cognition [On-line]. Available: pigeon.psy.tufts.edu/avc/sutton/

Spetch, M. L., & Edwards, C. A. (1988). Pigeons' (Columba livia) use of global and local cues for spatial memory. Animal Behaviour, 36, 293-296.

Uttal, W. R. (1976). Visual spatial interactions between dotted line segments. Vision Research, 16, 581-586.

Wasserman, E. A., Kirkpatrick-Steger, K., Van Hamme, L. J., & Biederman, I. (1993). Pigeons are sensitive to the spatial organization of complex visual stimuli. Psychological Science, 4, 336-341.

Weisstein, N. & Harris, C. S. (1974). The visual detection of line segments: An object superiority effect. Science, 186, 752-755.

Wolfe, J.M., Cave, K. R., & Franzel, S. L. (1989). Guided search: an alternative to the feature-integration model for visual search. Journal of Experimental Psychology: Human Perception and Performance, 15, 419-433.

Zeigler, H. P. & Bischof, H. J. (1993). Vision, brain, and behavior in birds. Cambridge, MA: MIT Press.


Acknowledgements

I would like to thank all of the collaborators who have been involved in these various projects over the years: Jeff Katz, Kim Cavoto, Brian Cavoto, Aaron Blaisdell, Traci Tauro, and Robert Shaw. It has always been great fun. I am also grateful to the National Science Foundation for supporting my research in avian visual cognition over this time. Finally, I thank Aaron Blaisdell, Peter Urcuioli, and Shelley Roberts for their comments on earlier drafts of this chapter.