After two years of focussed ARV research, over a decade of sporadic ARV participation, and over twenty-four years of combined RV project management and participation, I have recently formed a few thoughts, which are presented here.
ARV, for some reason (or, as I may show, many reasons), is less accurate than a traditional or normal Remote Viewing target. For standard remote viewing targets, I myself, and others I work with, regularly achieve accuracies of well over 75%. These are usually present or past targets.
There seems to be something within the process of doing ARV targets that affects the overall feel and accuracy in a negative way, bringing the accuracy down to something like 55-68% (this is my approximation and is not based on data analysis, but on my research and discussions). BUT clearly everyone seems to experience less accuracy on ARV-style RV projects.
For two years I have run a series of ARV/Unitary ARV projects for Cryptoviewing as a tasker and project manager, and here are some thoughts that I have formulated.
As with all remote viewing, we still do not know how it works. Therefore, imo, anything has the potential to influence the data and hence its accuracy. I have gathered evidence of, and I feel shown, what seems to be a communication channel between taskers and viewers. This has been shown in projects whereby the target only exists in the mind of the tasker, yet can still be accurately recorded by remote viewers. But there are many other factors that I feel CAN hinder any remote viewing project, probably more so on ARV-style targets, which typically involve the future/forecasting, money, and invested intent from the people involved.
As an example of my thinking, I will use two projects I ran from March 2019 onwards (still ongoing) to help illustrate my thoughts.
Public ARV project 1 – to predict the outcome of the U.S. elections in November 2020.
Start – March 2019. End – December 2020.
In March 2019, I ran a public ARV project through my Facebook group. I was the tasker and project manager; the seven viewers knew nothing about the project.
The project was a binary ARV to determine if Donald Trump would be re-elected U.S. President. The target was set for over nineteen months in the future, and the viewers were to sketch and describe the image given as feedback – only.
I analysed the data first; I had full knowledge of the target and of which image was assigned to which outcome. I also asked colleagues Jon Knowles and Tunde Atunrase (both seasoned ARV/RV experts) to BLINDLY review the data and match it to one of two images – they had no knowledge of which image represented which outcome.
All three of us selected the same outcome target image as a match. This image represented the B target in the binary set – to be shown as FEEDBACK if Donald Trump WAS re-elected.
Although still in play, it’s looking more likely each day that this is not going to be the case and that Biden is going to be the newly elected U.S. President. This was the A target in the binary set. So what went wrong?
On reviewing this project I can see several main factors that MAY have caused the inaccuracy:
- Project setup
- Time to the event
- Intent over time
- Errors in the analysis
So let’s look at these.
First, the project setup. On review, I can see no real issues that would have caused viewers to report more of target B than A. I was careful to select two targets of approximately the same age, size, form/function and interest. I have to admit that target B DID have a slightly more interesting shape/form than A, but I don’t believe that this alone is enough to cause any major displacement. The actual target cue was good for both of these:
- A – The remote viewer is to move to the optimum position/location to describe the ACTUAL structure focussed upon in the feedback image if Donald Trump is NOT re-elected president and this target is given as feedback. ONLY.
- B – The remote viewer is to move to the optimum position/ location to describe the ACTUAL structure focussed upon in the feedback image if Donald Trump IS re-elected president and this target is given as feedback. ONLY.
POINT 1 – Setup, I feel, is OK.
Next is Point 2 – Time to the event. Now, in this first project the time to the actual event and feedback was twenty-two months. (It is common thought within the RV/ARV community that the further out a prediction is from the predicted event, the less accurate it seems to be. This is based on the theory that over time, moving closer to an event, the options for it decrease, coalescing into a single route.) I’m not sure I have read any scientific projects that validate this theory, but it is common thought. As this project was quite some time away from the prediction event, IF the theory holds true, then this would impact the accuracy of the remote viewing data – so this COULD have been an effect.
Point 3 – Intent over time.
Now, this is a complex part of RV: intent. It’s known that the intent of the people involved in the project, especially the client, tasker, project manager, analysts and viewers, CAN have an effect on the results and the data presented. It’s known within Remote Viewing research that a level of telepathic communication CAN possibly be involved.
In this project I was the client, tasker, project manager and one of the analysts. My intent on this project is known and can be computed. In March 2019, I did not like TRUMP and did not want him to be re-elected. Therefore, at that date, IF my intent were to influence the final RV data, it would have created an A target selection – Trump NOT re-elected.
But hold on there – it gets more complicated than this.
My intent over those twenty-two months dramatically changed, for two main reasons. First, I did a second ARV project for a client: Cryptoviewing. Secondly, both projects were public, so over time I had a personal interest in the predictions being correct, both to validate them for myself and to satisfy the client, Cryptoviewing.
I think it’s safe to say that by November 2020 my intent had morphed into a somewhat schizophrenic one: I still did not like Trump, but I also had a need for the two public predictions to be correct. Over time my intent had dramatically shifted – this has to be listed as a potential cause of an effect on the remote viewing data, especially as in this first project I was client, tasker, project manager and part analyst. If this is the case, though, then it has to be conceded that my future intent MAY have influenced past data from the viewers.
This leads to Point 4 – errors in the analysis. In this project I knew the targets, analysed the data with that knowledge, and chose the B target as the best match. My analysis shows that although there was some displacement in three of the seven viewers’ sessions, three were clearly B target descriptions, and only one viewer outright seems to be describing the A target.
The second person to analyse the RV sessions was Jon Knowles, a very knowledgeable person in Remote Viewing who has spent well over a decade looking at ARV. Jon did NOT know which target represented which outcome – he was BLIND. Jon’s analysis was: “Three passes and four sessions favoring B suggests a moderate to strong pick for B.”
The third analyst was Tunde Atunrase, also a very knowledgeable person and long-time practitioner of ARV projects with great successes. Tunde reviewed the ARV data and was likewise blind as to which image represented which outcome. Tunde reported: “For me the overwhelming favorite is the Atomium structure in Brussels B Target”.
In conclusion, one unblind analyst and two blind analysts ALL picked the B target – a TRUMP re-election – as the prediction. I can’t find any issues of bias in the analysis of this project.
Therefore, reviewing the main factors, I feel the causes of this ARV inaccuracy are the ones marked ✖ below (✔ = reviewed and cleared):
- Project setup ✔
- Time to the event ✖
- Intent over time ✖
- Errors in the analysis ✔
My thoughts on this…
After much thought about this and the other ARV projects I have worked on and managed, I have come up with a structure that may both help explain where things go awry and be used to estimate future probabilities for accuracy.
Cofactors.
This is how I’m rating the cofactors I feel MAY influence each stage of the RV/ARV process:
- C – Client intent
- T – Tasker intent
- Pm – Project manager intent
- Ps – Project setup (numinosity and values)
- V – Viewers’ intent
- F – Feedback
- Ti – Time
- S – Social
- Fi – Future Intent
- Fa – Fatigue
I’m giving each of these either a 1 or 0 rating, 0 being unbiased, 1 being biased or influenced – with the exception of Ti, which scores 1 point per month between viewing and target time (explained below).
I feel the best-case scenario score would be 0, but in this case the March score was 23.
It could be more, but the viewers’ score is unknown.
In November this very much changed, becoming even more negative at 28. More on why later.
I feel this Facebook ARV project had this algorithm:
March 2019
C0 + T0 + Pm0 + Ps0 + V? + F0 + Ti22 + S1 + Fi0 + Fa0 = 23
November 2020
C1 + T1 + Pm1 + Ps0 + V? + F1 + Ti22 + S1 + Fi1 + Fa0 = 28
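To make the arithmetic explicit, here is a minimal sketch in Python of the additive scoring above. The `arv_score` function and the factor dictionaries are my own illustrative naming, not an established tool; an unknown rating (like the viewers’ V?) is recorded as None and left out of the sum.

```python
# Minimal sketch of the additive cofactor scoring described above.
# Each factor is rated 0 (unbiased) or 1 (biased/influenced), except Ti,
# which scores 1 point per month between viewing and target time.

def arv_score(factors):
    """Sum the cofactor ratings; a higher total suggests more potential bias."""
    return sum(v for v in factors.values() if v is not None)

march_2019 = {
    "C": 0,     # Client intent
    "T": 0,     # Tasker intent
    "Pm": 0,    # Project manager intent
    "Ps": 0,    # Project setup (numinosity and values)
    "V": None,  # Viewers' intent - unknown
    "F": 0,     # Feedback
    "Ti": 22,   # Months between viewing and target time
    "S": 1,     # Social noise
    "Fi": 0,    # Future intent
    "Fa": 0,    # Fatigue
}

november_2020 = {**march_2019, "C": 1, "T": 1, "Pm": 1, "F": 1, "Fi": 1}

print(arv_score(march_2019))     # 23
print(arv_score(november_2020))  # 28
```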
So first:
C – Client. The client has an intent, an expectation and a want from the project; this will have an effect. In this case the client was myself. I am not a U.S. citizen, and my personal intent at this stage was simply to know the outcome. My thoughts on Trump at the start of the project were that I didn’t really like him, but I had no investment either way. This of course VERY much changed in the later months.
So:
C0 +
T – Tasker.
I was also the tasker of the target, and again my intent and/or influence in March was, imo, 0. So:
C0 + T0 +
Pm – Project Manager.
Again, I was also the Pm of the target, and my intent and/or influence in March was, imo, 0. So:
C0 + T0 + Pm0 +
V – Viewers.
We had seven remote viewers in this project. We do not know their thoughts on it in March, and they were blind to the target at this stage, so I feel it’s safe to give this a score of 0.
C0 + T0 + Pm0 + Ps0 + V0 +
F – Feedback.
This is a target that WILL have (imo) real/solid feedback, so I gave this a March score of 0.
C0 + T0 + Pm0 + Ps0 + V0 + F0 +
Ti – Time.
Now, this is a calculation based on how many months lie between the viewing time and the target time. It seems that targets further into the future MAY have more probabilities or possibilities, which MAY lessen the closer the viewing is to the target time. So, in this case, I scored 1 per month between viewing and target time. Imo, targets within a month or so seem to be way more accurate than far-out predictions. In Cryptoviewing, our monthly predictions of the next thirty+ days seem to be scoring approx. 75%+ accuracy month on month. So I added 22 for the twenty-two months between viewing and target/feedback time (a simple month count; a small sketch of this follows the running total below).
C0 + T0 + Pm0 + Ps0 + V0 + F0 + Ti22 +
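If you wanted to compute Ti mechanically, a small helper like the hypothetical one below would do. Note that it counts both endpoint months inclusively, an assumption on my part so that March 2019 to December 2020 reproduces the 22 used here.

```python
from datetime import date

def months_to_target(viewing, target):
    # Whole calendar months from viewing to target/feedback time, counted
    # inclusively of both endpoint months (an assumption made here so the
    # result matches the Ti = 22 used in this article).
    return (target.year - viewing.year) * 12 + (target.month - viewing.month) + 1

print(months_to_target(date(2019, 3, 1), date(2020, 12, 1)))  # 22
```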
S – Social.
This factor I feel is necessary because high-profile targets with global effect, like the U.S. elections, get a lot of global social interaction and noise. Knowing that time within remote viewing isn’t linear, future social noise probably has an effect on target accuracy. With this election, there has been a huge amount of noise.
C0 + T0 + Pm0 + Ps0 + V0 + F0 + Ti22 + S1 +
Fi – Future intent.
With a target like the U.S. elections, and with the events surrounding myself as project manager, tasker and more, it’s probable that with far-out predictions of this magnitude my future intent would (and did) change, and that this MAY have affected the project data.
In this example, I also ran a secondary U.S. elections Unitary ARV project for Cryptoviewing, started on 13 September 2019. This project, using a single photo image and tasked to me to project manage by my client, Cryptoviewing, had its own set of calculations. But as things progressed towards the actual outcome, it’s certain that my needs and intent also changed. With two predictions now standing that TRUMP would win, my intent had obviously changed: having been agnostic before, I now had two ARV projects in the public domain and a client in Cryptoviewing to please. My intent was now conflicted, MAYBE wanting a TRUMP win to appease the RV community, fans and my client at Cryptoviewing. This future-intent change has to be factored into any calculations, as it may have affected both project outcomes.
March: C0 + T0 + Pm0 + Ps0 + V? + F0 + Ti22 + S1 + Fi0 + Fa0 = 23
On reflection, I would say the November score probably changed to something more like:
November: C1 + T1 + Pm1 + Ps0 + V? + F1 + Ti22 + S1 + Fi1 + Fa0 = 28
* The viewers’ intent, and their possible knowledge that they had been involved in the project, would also have an unknown effect on their data.
This November score shows the client’s intent (mine) to have changed, because I was now invested in both predictions being accurate, both within the RV community and for the client that came on the scene for the second ARV project on the U.S. elections in March 2020.
If a single predictive project had been done in, say, October 2020, with me as the client, tasker and Pm, then it might have created a score like:
C0 + T0 + Pm0 + Ps0 + V? + F0 + Ti1 + S1 + Fi0 + Fa0 = 2
If I also had a second project in play for a client, then this might score:
C1 + T1 + Pm1 + Ps0 + V? + F0 + Ti1 + S1 + Fi1 + Fa0 = 6
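Using the same illustrative `arv_score` sketch from earlier, these two hypothetical October scenarios compare like this:

```python
# Hypothetical single-project scenario: viewed one month before the event,
# with me as client/tasker/Pm and no second project in play.
october_single = {"C": 0, "T": 0, "Pm": 0, "Ps": 0, "V": None,
                  "F": 0, "Ti": 1, "S": 1, "Fi": 0, "Fa": 0}

# The same scenario with a second client project running in parallel.
october_two_projects = {**october_single, "C": 1, "T": 1, "Pm": 1, "Fi": 1}

print(arv_score(october_single))        # 2
print(arv_score(october_two_projects))  # 6
```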
Conclusion:
In my ARV projects, I feel that the time between the RV data/prediction and the prediction event, coupled with there being a second project and with both projects being public, may have influenced the accuracy of the data in these predictions.
Future projects should be done:
- As close as possible to the prediction event to decrease TIME options
- With the understanding that public projects may change the INTENT of those involved, due to wanting the predictions to be accurate and to please the RV community
- One project or prediction at a time. Additional projects for other clients may impact ALL projects due to intent and the desire to please other or newer clients.