Wednesday, March 05, 2008

Tasker Issues (#1,912)

Archived from the former firedocs blog. 07 March 2006



Unless a tasker is an investigator or scientist (and maybe even then), when it comes to "social viewing" (viewing in groups), it helps if the tasker is also a serious viewer. As a viewer, a person understands and relates to the freedom that has to exist internally. The more someone views, the more laid back they tend to be about the results other people get when tasked by them.


The less a tasker views, the more dangerous the tasker's position of 'control' can be. Since avoidance of personal viewing and a desperate need for control often go hand in hand (for psychological reasons I won't get into here), people who really want to task but really don't view should probably be avoided.


Next thing you know they'll be wanting to "analyze" session data, as if most people really know anything at all about either (a) analysis, or (b) remote viewing data, let alone (c) the two combined. This usually comes down to "subjective evaluation" of data, which anybody can do... and which, outside of applications, the viewer should be doing for themselves. Any moron can tell whether their data is accurate, wrong, somewhere in between, or simply unknown. But only the viewer knows how it felt inside, the way it 'came in' to them, how that feeling relates to feelings about other data, and so on. Other viewers can helpfully point out things they see in a session (as is done in TKR's Remote Viewing Galleries), but the real point of evaluation rests with the viewer.


The less specific the tasking and the less clear the feedback (a common combination in social viewing), the more people seem to think analysis is necessary. I don't argue that it is more or less necessary; I question whether it has any point at all in that case. If you can't be specific in your tasking, you don't want a remote viewing session, you want a big glob of data you can sift through for whatever you think matches or seems likely. You might as well let the analyst write the result they want and save the viewer time.


Now, if you're just curious and you're tasking/viewing for fun, then sure, you can size/shape/arrange taskings in whatever way makes your heart happy. But in that case, I wouldn't expect anybody to be "critically evaluating" much of anything.


In social viewing (by which I mean viewing that is not specific to applications and is shared among people), ideas about analysis inevitably lead to someone in an armchair deciding what they think is good vs. not. In my view this is little more than a game for the person in the armchair. If the tasking and feedback are clear, the viewer can do that just fine themselves. If they are not, nobody is fit to do it. And if anybody were, it would be the viewer, who has the tasking context, not anybody else.


Nothing is more exasperating than watching someone assign a tasking the width of a horizon, with feedback that barely touches the focus if it exists at all, and then set about deciding which viewers did well vs. poorly and where. In a social sense this almost inevitably, eventually, leads to some other factor taking the lead in determining who-is-most-ok, such as shared belief systems or methodologies. Eventually you have cliques that aren't clicking, unless your viewers are all submissives. Which the best viewers usually quite pointedly are not.
