I am in the process of writing up and implementing a scanning audit for our aquatic staff.  We are an American Red Cross facility, and we currently do an on-stand professionalism evaluation (posture, bottom scan, attentiveness, uniform, etc.) but do not have a tool to test what our guards REALLY SEE.  I am aware that E&A has a scanning audit standard, but my question is: what do YOU do to test your staff's ability to scan and recognize what is in their zone?  What kind of drills/audits do you use for this (red ball/silhouettes/Timmy/etc.)?  What information is written in your SOP regarding this?  What is your disciplinary action for a failed audit?  Anything else that would help me write this up?  A procedural write-up would be helpful but not necessary; I am just getting a feel for what everyone else is doing.

Thanks for your time,


Replies to This Discussion

Hi everyone, I agree with Jim W. that if you can get Timmy in the water without one of your guards noticing it, that would be a good trick, since many of us try to emphasize some kind of scanning technique that covers all the water in each zone.

My priority for my staff is not worrying about looking for a doll at the bottom of the pool, but looking at the types of activities and patrons you have in the water, setting your zone coverage accordingly, and then trying to teach your staff to be in a preventative mode. Have them walk up to a child and explain what may happen if they swim in this area, or explain to a parent why they need to be in the water with their child instead of sitting on a lounge chair reading a book, and so on.

Because if you find a body on the bottom of a pool, that's too late.... WAY TOO LATE !!!!


I think I understand what you’re trying to convey. However, you misconstrue the nature of an audit. In my estimation, an audit is not intended to be a training tool, the attribute you accord it, but is an evaluation tool to ensure transfer of learning. Therefore, it is not intended, as you describe it, for lifeguards to look for a “doll on the bottom of the pool.” If this is your take on the value of audits, I understand your reservations.

But let me be clear that audits are intended to be evaluative of lifeguard performance. There is disagreement over whether an audit can be fashioned in a manner that makes the evaluation valid, although I don’t necessarily see this as a problem. The audit works under the general theory that lifeguards must exercise vigilance against anomalies in the water. This means lifeguards must be trained to respond to shapes and shadows that are abnormal to the particulars of their zone. An audit therefore does not train lifeguards to look for dolls in the pool, but merely uses such proxies to measure recognition response, since lifeguards should be vigilant against waterborne anomalies: if there is something down there that shouldn’t be there, lifeguards must be sufficiently vigilant to recognize it as such.

In passing I must also add that just because a body is on the bottom of the pool, it does not mean it is too late, since we may not know how long the body has been down there (it could be 10 seconds, minutes, hours, or days); it is only too late when it scores as a morbidity/mortality statistic.

As I have mentioned before, such audits are NOT an accurate measure of vigilance, since top-down programming may or may not be inclusive of the minute (or major) differences between the false proxy and a real victim. Such a lifeguard may notice the false proxy late because it does not match the signs of a real victim.

If the lifeguard is focusing on finding a proxy to the exclusion of real victims, the proxy may be noticed more readily than a real victim, which is not desirable, but is a condition that is at least possible and that you cannot control or anticipate. This could, for example, be the lifeguard who failed this audit before and is determined not to lose his/her job.

Somewhere in the middle are the majority of lifeguards who will notice the proxy in a timely manner either because of its sudden appearance (bottom-up stimuli) or because his/her programming includes both false and true top-down controls. 

Up to now, I have assumed that all lifeguards are attentive but with different internal programming. There is also the possibility that a lifeguard is inattentive at the time of the audit. This would result in a delayed response as well.

In the end, when you evaluate the lifeguard based on the time from appearance to recognition, you have no way of knowing which of these lifeguards you are evaluating; hence the evaluation is unfair and your conclusion potentially flawed. Regardless of the lifeguard's attention programming and degree of attentiveness (which you cannot know), this type of audit is an intrusion, as I have mentioned many times before, since it imposes a test that requires the lifeguard to look for or discover false positives during a time when the lifeguard should be focused on finding real victims.

Again, I would like to say, for the record, that the American Red Cross does not perform these kinds of audits during its evaluation of lifeguards. Rather, it tests lifeguards by removing them from their primary duties and ensures the facility is adequately covered by other lifeguards during testing.

Any rebuttal (including those already contained in this thread) that asserts what a lifeguard should be able to discover while scanning is ill-conceived, because it minimizes the importance of top-down controls, especially with experienced lifeguards. These rebuttals also discount the potential negative effects of imposing secondary testing/training intrusions while primary responsibilities are being carried out.


In a nutshell, as far as I can tell, your argument that audits are “not an accurate measure of vigilance” relies on the unpersuasive claim that drowning episodes cannot be replicated. It suggests drowning episodes generate signal qualia (sense data) so unique as to be unassailable. This putative difference, the argument goes, creates a signal schema that forces lifeguards into a dual function: vigilance for drowning and vigilance for audits. Again, this argument hangs together nicely only if you buy the notion that drowning signals cannot be replicated.


As you can guess, I’m not buying this since lifeguards must be trained to respond to anomalies (shadows, blots on the bottom of the pool and the like) since drowning victims are sometimes initially characterized merely as anomalies, with tragic consequences.


Therefore the argument that lifeguards focus on proxies to the exclusion of “real victims” is a red herring, as it implies an incompatibility between “real” and “proxy” victims.


You also state that a proxy is not equal to the signs of a real victim. In what way are they not “equal” in the following situation:


A lap swimmer, after several laps, stops dead in the water and floats motionless, facedown.


If this is a drill to measure lifeguard response, then how is it different from someone succumbing to a sudden illness such as a petit mal seizure or heart attack? What are top-down or bottom-up controls to make of this?


The idea that you cannot replicate a drowning scenario does not square with my experience.

They are different, and you know it. Are you trying to tell me that the lifeguard is not aware that he or she is responding to an audit drill/test when he/she discovers a proxy victim? Of course he or she knows it is false; I have seen every type of audit proxy, from balls, to shirts, to two-dimensional figures, to dolls, to people pretending to simulate a distress/drowning condition. Do any of these look like a real victim? NO! None do.

If a lifeguard is correctly trained, the lifeguard knows the difference between a real body on the bottom and something you place there that is not a body. In all my years of lifeguarding, I have never entered the water to rescue a shadow or a piece of trash or a little rubber ball. Not even for a drowning manikin. I have always used my observational skills to ascertain that I had a real victim who really needed my help. And I never missed anyone, nor failed to help any patron in need.

Actually, my case for the intrusive nature of these audits is even stronger if you have found a way to replicate a drowning scenario so well that the lifeguard cannot tell, even momentarily, whether it is real. That is pure intrusion and wholly inappropriate. Find other ways to test your lifeguards.

As a head lifeguard and aquatic coordinator, I used to watch lifeguards to determine how effectively they were enforcing rules and maintaining control of the swimming area. Using this method, I could determine whether lifeguard loads were too great, whether any lifeguard was not scanning effectively, etc. This is a superior method to determine the attentiveness of lifeguards because it evaluates them on something that is actively a part of their primary responsibility and it is nonintrusive. Additionally, it does not require the lifeguard to vacate his/her station without good reason and it does not subject the facility to a false emergency procedure.

So, I would say this test is real, nonintrusive, and noninvasive.

To answer your question, Joe: if you use a real lap swimmer who suddenly stops dead, facedown, and floats, how motionless is he? How long can he remain motionless? How realistic is his collapse? As I mentioned earlier, even if the simulation is flawless, it is an intrusion that causes the lifeguard to respond to a false emergency, taking him/her and other lifeguards away from their primary responsibilities while this fake emergency plays out, somehow to placate the chain of command.

By the way, I would give a higher grade to any of your lifeguards who knows that the "victim" is faking it and does not enter the water. In my day, we excluded kids who pretended to be drowning because we had no time for such foolishness while seeing to our primary responsibilities.

Also, what would you do if one of your lifeguards refused to participate in any false proxy "rescues" on the basis of the RID factor? Would that be grounds for termination?

So how would a properly trained lifeguard respond to the motionless swimmer I describe in my post? I don’t know of any training that claims to make possible an unequivocal differentiation between an audit and a drowning under these circumstances.

That aside, my point is that lifeguards must be trained to recognize anomalies because there is an insidious psychological need for rescuers to want to interpret anomalies as innocuous because anything other than this is too devastating to contemplate. It is in our nature to seek plausible deniability.

The rescue component, as I’ve stated in previous postings, is not material to my view of the function of an audit, and therefore does not require the lifeguard to leave his station. To suggest that audits are conducted to placate the chain of command is a wholesale indictment of motivation that is not supported by the typical operator. In my view, audits are intended to measure training effectiveness, and this is not incompatible with any strategy of observation.

The argument that audits generate intrusion has been discussed ad nauseam, and as I’ve already argued, not all intrusions are created equal. The claim that audits pose an unacceptable risk cannot be statistically proven, nor is it supported by contemporary risk management analysis.

As for lifeguards refusing to participate in the audit, I would let them know from the outset that there is a management expectation of participation and that failing to participate will affect their at-will employment status with the organization.

But you still fail to answer my original questions: What is the difference in the qualia between a motionless floater who is performing an audit and a passive drowning victim? What are top-down or bottom-up controls to make of this (as a training tool)? Because if there is no distinction, we have succeeded in doing something you deny is possible: creating a proxy drowning scenario that does not produce a signal split requiring the lifeguard to be vigilant for drowning and vigilant for proxies.

Finally, I think this discussion has become too erudite and abstract as a practical matter. I think Dewey Case is being reasonable in his general assessment of this topic. I do not think repeating one’s position necessarily advances understanding. I think I will probably limit my responses to fresh ideas and forego these rutted paths.


Joe: That is interesting. I was going to say the same thing: simply stating that all intrusions are not created equal does not make it so. Also, as I have mentioned many times before, the adverse effect of intrusions in general cannot be measured, but we know it can be a negative factor.

Additionally, you ignore the main point that lifeguards informed of audits already must split their vigilance between looking for real emergencies and false ones.

Joe, if you use actors to portray victims, are you saying that all other proxies are invalid because they produce visible signals less like the real thing? Your statement that lifeguards must be trained to look for anomalies (shadows, gray spots, trash, etc.) is only partially true; lifeguards must also rule out such anomalies as victims/emergencies using observation and victim-recognition skills.

Even using actors, a real victim cannot be completely and accurately replicated. As lifeguards gain experience, they should become less and less likely to be fooled by such acts. Actors faking drowning, sanctioned by aquatic management, is not a good thing. Sorry.

Good comment.

I realise I come to this fascinating thread rather late, that much of what can be said has been said, and apparently no one has any intention of changing their mind on this subject. However, I wanted to share my own experience, in case it is of interest.

I manage a pool facility in Australia. For a number of years we used the ‘mannequin test’ in which a mannequin was discreetly placed in the pool and lifeguards were timed to see how long it would take them to spot and retrieve it.

Then something happened that led me to reassess this strategy. One day I overheard two lifeguards in the lunchroom having a conversation. I can recall almost word-for-word what was said: “I probably spend half my time watching the supervisor to see if he’ll put the mannequin in.”

Rather alarmed, I interviewed a number of lifeguards and discovered that as soon as a new guard was hired, they were informed by more experienced guards of the mannequin test, and of the key things to watch for to ensure they weren’t caught out: supervisors walking towards the supply room, supervisors walking around the pool, supervisors looking at their watches.

Unintentionally I had created a scenario in which my guards were focusing at least some of their attention away from the pool or on completely the wrong signals.

So I’m afraid I have to side with Ron on this one. We no longer use the mannequin or any sort of ‘red ball’ audit at my facility. What I discovered was that ‘red ball’ audits mostly proved lifeguards knew how to look out for mannequins (often in cunning ways that were not helpful from a safety perspective), not how to prevent drowning. Instead we put our focus on teaching and monitoring effective scanning strategies (including the bottom of the pool), distress/drowning recognition, preventing people from hyperventilating, etc.

Again, this is just my experience, but I wouldn’t be at all surprised if guards at other facilities have developed methods of watching for signals of the ‘red ball’, instead of watching for swimmers in trouble.  

