Thursday, March 06, 2008

Ack!--uracy and Viewer Development

Archived from the former firedocs blog. 01 May 2006



You never count your money
When you're sittin' at the table
There'll be time enough for countin'
When the dealin's done.

I never met a viewer who didn't groove on being right. Whoa, won'tcha gimme that gut-level, spine-climbing euphoria of yes yes! Yes! YES! Yeeeehaw! when there's a target rocking inside you and the feedback hits outside you and the connection loop rushes through your body and makes ya feel bubbly-stoked inside for days (not to mention walking around grinning like an idiot for awhile right after).


Of course, every viewer learns the hard way that a hard-hit often equals a hard-punch to the ego on the next round. As I say, if there is one thing consistent about RV, it's the damnable inconsistency. Accuracy... is really just two four-letter words.


o~0~o


Many of the confusions in layman's RV about best-practice processes stem from the fact that what is good for viewer development is not always the same thing that is ideal for more developed viewers.


What you often hear talked about is what is 'ideal' -- like, if you were an expert and you were working an operational target, it should be done like XYZ. Or, if you were working at that level in science or applications over a long period of time, we would expect things to be done like so. Viewer profiling, for example. Session analysis. Or the many measures of "accuracy" that can arise. But the reality is that most people aren't experts and aren't working operational targets.


The most important goal going into a session is target contact. Not only inside the session, for accurate data, but inside yourself, for learning purposes during and after the session.


You don't just learn from the outside; you also learn from the inside.


The target inside you has plenty to teach, not only in the session but afterwards in review, in dreams, and later -- days, weeks, months, years -- in more review.


Accurate data that you don't feel, and that comes through plainly, may not provide the same degree of learning as data that you have an actual "sense" for, even if it's wrong----at least if, on feedback, you feel you know what went wrong with the accuracy.


Target contact is not really defined by what you "feel"----that does not always correlate with accuracy (and protocol violations like someone 'informed' in the room can increase the "kinesthetics" of a session)----but in an informal sense, it sure seems like it to me and many viewers I know. I do notice that sessions with a strong "feel" which are not accurate, at least in my case, are inclined to be completely off target. A really solid feeling of contact, for me, usually means I was well on target, well off target, or seriously in AOL drive; it's likely to be one of those three, with no casual conglomerate in the middle.


The "feel" of target contact can be so seductive, that many viewers would rather have a "so-so" session they really "experienced," than a great session they didn't really feel at all. The "feel" is the "fun" of Remote Viewing... the drug that'll get ya. Most the rest is a mental exercise.


There is a sense of responsibility and intimacy entwined with the "feel" of target contact. When you feel the target, it is your session, as if it's a work of art. It is your target, as if it's a friend----even if on some level, the target is horrid. It's personal. And it's permanent. You'll not forget it.


And sometimes, two full seconds of feeling solid target contact can give you more accurate, conceptual, relational, and sketchable information to go back and dig out of memory than another hour in session without that.


There is no middle ground on the subject of target contact. Without it you have no session at all. With total contact, you have what some call "full rapport" or "bilocation." 99.99% of all viewing is, of course, somewhere between those two extremes.


A primary goal during the psychic 'experience' is to get as much target contact as you can without overwhelming yourself. That IS the information: that "meeting in the middle," that intimate merge of you+target.


o~0~o


The ability to make target contact is what some scientists say doesn't change. A viewer's ability to get more data, better data, more advanced data, all those can be brought out through practice. But "how often, out of 500 targets, a viewer is likely to have clear target contact" is the variable that does not seem to change--not with all the viewers they've tracked, sometimes for decades.


Novice viewers, and those who work within systems that use wide-scope taskings and a lot of inferred sorts of feedback, may tend to feel that they are nearly always "on target," and it's only the details that vary. (By some standards, we are all "nearly omniscient.")


Research suggests that about 30% of the data presented in any session can be applied to about 30% of the possible targets. What this means is that a lot of viewers probably consider sessions "on target" that have "some accurate data" when really, the data is there as much by chance or a couple of tiny 'spots of clue' as by any actual, decent, psychic target-contact.


If you "amplify" that effect by bringing in formal psi methodologies, this chance factor is raised even more. Some sessions have so much info in them, simply because the structure of the methodology requires the viewer record something, that they apply quite well to about 82% of all possible targets on Earth. Pretty hard to miss with that.


o~0~o


It's pretty difficult for a viewer to really clearly see when they are "solidly on or off target" until they start becoming solidly on or off target. Target contact isn't always strong, especially for novice viewers. There is plenty of wandering, guessing, incredibly ephemerally nebulous hoping going on in early sessions.


The more the viewer develops, the more they start to "feel" target contact. The more they start to feel it, the more specific they tend to be in their data. This means when they are on-target, it is not just a matter of low-level data having several matches; it is really obvious that they are really "on-target".


And when they're off-target, it is just as obvious!


This often makes good viewers more insecure about showing their data than the average novice. (And not just because they have 'more to lose' with peers.) When a novice viewer misses a target, they're likely to have enough wishy-washy, 'broadly applicable' low-level data that it's a pretty subtle thing; one can probably stretch a few basic descriptives into 'possible' matches. But when a developed viewer misses a target, it's very likely that they are so completely off that they're going to be totally humiliated by it.


o~0~o


Concerning accuracy, the first basic is----the basics. You can read one of the Firedocs Remote Viewing Collection "FAQ" entries for info about accuracy. I give a few examples of different ways of measuring it, and point out that numbers are nearly always used to obfuscate in this field and you can't take anything seriously unless you know the protocol and know the measure.


There is another issue that only indirectly has to do with accuracy, but directly has everything to do with the viewer, which goes back to skill and hence accuracy (from the other direction). That is:


How you measure accuracy, should be greatly dependent on when you measure accuracy.


There are three basic kinds of when in my example:



  1. When you are new to RV, or, when you are simply working on your practice, your development "in general" and ongoing;

  2. When you are well into a viewer development cycle, getting good data fairly regularly, feel a sense of target contact pretty regularly, and want to start closing-in on specifically planning your practice around your skills (or lack thereof in some areas);

  3. When you are fairly experienced, and getting closer to a skill level that would make applications workable, and would make an actual measure of your skillset necessary.


And there is another, even-more-important kind of when:


After the session, vs. separately from the general viewing process.


o~0~o


Most viewers at point 1 are not feeling solid target contact regularly. They are still going through so many issues related to both target contact and communication that you simply cannot take their data as a good example of anything except "their learning process." If they get a gender or a color wrong, it may not be because their target contact was ok but they weren't accurate; it's just as likely that they either didn't have very clear contact to begin with, or that even if they did, they may have so many other issues that what ends up on paper just isn't a real good example of what was inside them anyway. Yes, that's what we're learning, but on the early side of viewer development, it's a pretty nebulous process to begin with.


A viewer at point 2 might be one with a good deal of experience, who is expected to make decent target contact, but who wishes to review his data from multiple sessions and consider what "type" of data is more inclined to be accurate vs. wrong. (In this example, we'll use the typical viewer profile database: data is broken into components, noted as right/wrong/other-unknown, and you end up with how 'much' of a certain type of data the viewer got, and how 'accurate' that type of data is for that viewer... and this combines over multiple sessions until you have something of an average, a curve.)


Aside from the variables in a session like analytical interference, for the most part the viewer has some kind of "process" down for their viewing. If they translate a certain kind of data wrongly, it may well be that their translation needs work, or that they need more experience on that kind of data. This kind of accuracy gauge (viewer profiling) can show you that, and you can begin gearing the taskings for the viewer toward data they need more experience with; and begin upgrading the complexity of tasking when working with data types they have a lot more fluency in.
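

To make that bookkeeping concrete, here is a minimal sketch (in Python) of how a viewer-profile tally might be accumulated across sessions. The data-type categories, judgement labels, and example sessions are hypothetical illustrations of the record-keeping only, not any particular school's scoring scheme.

    # A minimal sketch (hypothetical labels) of the viewer-profile tally described
    # above: each judged data point is tagged with a data type and a
    # right/wrong/unknown call made during session review, and the counts
    # accumulate across sessions into per-type hit rates.
    from collections import defaultdict

    # One session = a list of (data_type, judgement) pairs from the review.
    sessions = [
        [("color", "right"), ("texture", "wrong"), ("dimension", "right"), ("concept", "unknown")],
        [("color", "right"), ("texture", "right"), ("dimension", "wrong")],
    ]

    counts = defaultdict(lambda: {"right": 0, "wrong": 0, "unknown": 0})
    for session in sessions:
        for data_type, judgement in session:
            counts[data_type][judgement] += 1

    for data_type, c in sorted(counts.items()):
        judged = c["right"] + c["wrong"]          # 'unknown' stays out of the rate
        rate = c["right"] / judged if judged else float("nan")
        print(f"{data_type:10s} right={c['right']} wrong={c['wrong']} "
              f"unknown={c['unknown']} hit_rate={rate:.2f}")

Over enough sessions, a table like this is all a "profile" really is: it shows which data types the viewer translates reliably and which need more work, which is exactly the tasking decision described above.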


Then there is the "how" it is measured.


As a first basic, all viewers no matter what their skill, if they're practicing, ought to have time for a session review. But to get to the more formal measures:


A viewer at point 3 would be similar to point 2, except that their evaluation might be better geared to a far more "specific" set of parameters. Let us take working on practice targets with photo feedback, as it's the clearest case: in point 2, the viewers are working on the focus of their feedback, which is a photo. Let us say there is a church with a painted roof, several people outside, some steps with a railing, blue sky, and some trees at the edge. Whatever data they get that matches that is going to be accurate.


At point 3, the practice of the viewer should get more specific: whether a local live-feedback-as-target or a photo-feedback, bring the aperture down to something very specific. For example, the steps and the railing. That is the focus. You can remove the other info from the feedback or not, as you wish, but I suggest removing it. The viewer is then being judged on whether they accurately acquire very specific information. The range of "chance and accident" goes down drastically at this point. We are no longer asking the viewer to describe an entire location and everything in it, which, as anybody knows once they start evaluating sessions, is a lot more possibility for data than most folks think. We are now judging ONLY based on certain very key and specific information. This is going to greatly change the "accuracy percentage" resulting, even with the same viewer using the same accuracy measure----because the target's scope, the viewer's "aperture", narrowed dramatically.
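

As a rough illustration of that aperture effect, here is a toy simulation (invented feature names and numbers, not measured RV statistics): the same handful of generic descriptors scores far "better" against a whole scene than against one designated element, purely by chance.

    # Toy illustration of the aperture effect: the same generic descriptors match
    # a broad target far more often than a narrow one, by chance alone. All
    # features and numbers here are invented for the example.
    import random

    random.seed(1)

    FEATURE_POOL = ["hard", "curved", "blue", "tall", "wet", "metallic", "open",
                    "wooden", "stepped", "green", "noisy", "crowded"]

    generic_data = {"hard", "blue", "tall", "open"}  # vague, broadly applicable session data

    def chance_match_rate(scope_size, trials=10000):
        """Average fraction of the generic data points that happen to match a
        randomly drawn 'target' containing scope_size features."""
        total = 0.0
        for _ in range(trials):
            target = set(random.sample(FEATURE_POOL, scope_size))
            total += len(generic_data & target) / len(generic_data)
        return total / trials

    print("whole scene (8 features):    ", round(chance_match_rate(8), 2))  # roughly 0.67
    print("one key element (2 features):", round(chance_match_rate(2), 2))  # roughly 0.17

Same data, same scoring rule; the "accuracy percentage" changes because the scope changed, which is why broad-scope and narrow-scope numbers can't be compared directly.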


o~0~o


The most important part of any discussion about accuracy needs to educate new viewers about this:


A practice session is a two-part process. The first part is the session. The second part is the session review. They go together.


"Session Review"


When a viewer finishes a session, the appropriate thing for him to be doing at that moment is looking at feedback, as soon as possible; concentrating on it, as intently as possible; and I recommend, mellowing out a little, and attempting to "get in rapport with it" to the extent possible (yes, even though you know what it is---of course). He should go through his session and attempt to revivify or remember-clearly, what the experience FELT like when a given piece of data came through. He should look at the feedback, not to see if "there is any match anywhere" intellectually, but to see if he can FEEL why that information piece came across, what it might relate to, and if it's not accurate, what feeling was misunderstood or ignored, that resulted in the error.


He should do this for his entire session. He should take his time. A practice session has two parts: the session itself, and the session review. That review needs to be in just as much a psychic, receptive state of mind as the session. Viewers can learn as much from session reviews as in-session, or more, and that's the normal way of it, since that's when you have feedback and can try to make sense of the many subtle senses that came across that you had no idea how to articulate or what to do with. The psi and the receptivity should still be going on, in an ideal framework.


o~0~o


So let's go back to: How you measure accuracy, should be greatly dependent on when you measure accuracy.


Under no condition would I ever, ever, suggest that any viewer, especially a novice viewer, start databasing their sessions the minute they start viewing. (Just because someone teaches it to you, doesn't mean you have to start using it right away.)


First, as noted above, they are so much in a flux-learning state that expecting any data they get or don't get to have some profound significance is kind of beside the point. They just need to view. They aren't consistent enough even in the viewing process for the cumulative-session info to have a point. It's like taking five pieces of nearly-random data. When you combine them all, all you have is one piece of nearly-random data that is five times as big.


Second, remote viewing takes time, and since most humans have jobs and family (aka "a life"), it's important the time they DO have be spent doing something useful. Down the road, when their process and data flow is more regular, they can step into more detailed, multi-session evaluation. Initially, it's just a bunch of time distracted from actual viewing and related processes.


IMPORTANT NOTE: Math is not a related process to viewing. The 'session review' goes through each data point in the way it should be done. Finishing the session and then, instead of doing that session review, launching into a run-through where the viewer excitedly tallies up their 'points' and does the math to come up with some number that makes them feel happy, is a complete distraction from one of the most critical learning-potential moments of the process.


Third, there is a strong tendency for left-brain types to be drawn to remote viewing, and this is a little too Kether----by that I mean, after-session is a moment where they really need to be focusing on the session, the feedback, and allowing it----not distracting themselves into math, which most left-brain types will LOVE doing. You find them spending 10 minutes on a session and an hour on their "accuracy worksheet," and eventually you start realizing they're spending more time on math than on viewing, and more time focused on numbers than on process.


Fourth, novice viewers are still working on the first basics; they are not yet to a point where they have a "sense of self" as a viewer, a little confidence from a sufficient amount of past success, and very importantly, a sense of what matters about a given practice task. When you dump these viewers into instant-accuracy evaluation, it can distort the data collection process from that point on.


The viewer may first begin to "pad" their results. If they say "shiny," they will also decide they should say "light, bright, gleaming, reflective...". Golly, look at all those extra data points... and so accurate! (I've yet to see many targets in which you cannot, by some grace, find a way to consider data like that accurate.)


Then, the viewer may start to CYA against possible inaccuracy. They may start to say "a man" because they sense a man, but instead they back up and say "biological. human. male." and so on. Ding! More data points! And a greater "span" of data, which better ensures something might be correct (and less threat of being incorrect, or at least not as MUCH incorrect).


(In order to explain why perceiving a man is not an analytical "overlay" construct unless you made other data into "man", I'd have to devolve into a talk about AOL. Not this post. Some other time.)


When "accuracy judgement" (not to be confused with the every-data-point "session review" noted above) is tied to novice viewers, it becomes an albatross of attitude and it begins training them literally toward the wrong focus. The focus is not how many data points you get; the focus is not how many of those points are right; that is a math game, not viewing!


The focus is always Target Contact. Everything else follows from there.


o~0~o


The first viewer-measure of accuracy is the viewer's own session review. That review-point is literally the whole point of doing a practice session. To lead to that! Data is only the point of the session when you are working for science, show or application. Learning is the point of the session when you are practicing.


So WHEN is any "accuracy measure," outside the given of a session-review, appropriate?


1. When the viewer has become consistent enough that taking the time to put multiple session results together in a database, for an average, will have some actual meaning.


2. When the viewer has enough experience to have some sense-of-self as a viewer, and will not be distracted by the measuring process in a way that distorts their later viewing process.


3. When the viewer is solid enough in the creative and flexible viewing process that they aren't vulnerable to using math as a shield----a process they feel far more confident about than viewing.


And WHEN should such "accuracy measure" be carried out?


Any time that is NOT CONNECTED TO the viewer's session process.


If performed after a session, even after a session review, there is a high tendency for the process to speed up and move toward the end-goal, which at that point, instead of being the review, becomes the math.


If performed wholly separately as a process, this provides the added benefit of a totally separate time and state of mind that may see something in the data/session that you didn't see initially.


o~0~o


So next time you find yourself in a discussion about accuracy, bear in mind the options. Are the numbers for application purposes? Are the numbers for viewer-development purposes? Or are the numbers being collected on novice viewers where they take more time to collect than the result is even worth?


Numbers at the intermediate level can be collected, but shouldn't be considered to mean a lot for "viewer comparison" reasons given the scope of the targets. They can mean something for viewer development, of course.


Numbers at the advanced level should only be considered in the light of highly specific targets, preferably with tasking that clearly requests certain information. So you are really only measuring whether or not a viewer got very specific information. Those sorts of numbers, you can compare.


Target contact matters more than numbers. Focus on the target contact and the numbers will take care of themselves.
