Did Grok monitor my computer

I'm telling you. I hit enter and the reply popped up instantly
That is the definition of processing power. AI is predictive text like autofill. It is just autofilling much more than the next word. It’s autofilling an entire response.
 

I'm seriously amazed. It was not a common topic, there were several new twists in my ask, and it still gave a coherent response instantly
 
If I write up a strategy document to present at work, AI turns it into a well done and improved version in seconds. It reduces a week's worth of work. Way less wordsmithing and fewer sessions of strategy development. I appreciate its help.
 
BTW, I find automated grammar assistants quite lacking. They often use sub-par grammar and syntax. Their output still passes for most purposes, but not for technical documents
 
There is little to no privacy on the internet. Companies are tracking you constantly. Could Grok be monitoring its users? Wouldn't surprise me. I do agree with the comments about processing speeds as well, but I am leery of the power of big tech.
 
I use AI (Any Idiot) to answer all my philosophical life questions......... :eusa_whistle:
 
I had an idea for a total new type of affordable housing business -- details not relevant.

I typed out a page's worth of points and questions in a Word document on my computer, including several questions for Grok

I copied the query, pasted it into Grok, and not one second passed before Grok replied with a comprehensive outline addressing each of the considerations I set forth

Not a second passed. I hit enter and the reply appeared instantly

How is that possible?
Future AI models are gonna be way faster than the ones we have now. :)

Perceived response speed (summary)

Total wait ≈ network latency + queuing delay + (per-token compute × number of tokens)

  • Network latency: round-trip time for request/first bytes (ms).
  • Queuing delay: server-side scheduling, batching, cold starts (ms–100s of ms).
  • Per-token compute: time to generate one token (depends on model, hardware, quantization).
  • Number of tokens: prompt length + generated tokens.

For scale, 100 ms is shorter than a typical blink:
- Typical human blink ≈ 100–400 ms (often ~200–300 ms).
- 100 ms = 0.1 seconds, so it's at the very fast end of blink duration and may be barely noticeable.

Notes:
  • Streaming can overlap compute with network return.
  • KV-cache and shorter prompts reduce effective compute.
  • Batching improves throughput but can add queuing delay for individual requests.
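The formula above can be sketched as a quick back-of-the-envelope calculation. All numbers here are illustrative assumptions, not measurements of any real service:

```python
# Rough model of perceived LLM response time.
# total wait ≈ network latency + queuing delay + (per-token compute × tokens)
def total_wait_ms(network_ms: float, queue_ms: float,
                  per_token_ms: float, n_tokens: int) -> float:
    """Estimated end-to-end wait in milliseconds for a non-streamed reply."""
    return network_ms + queue_ms + per_token_ms * n_tokens

# Hypothetical numbers: 50 ms network RTT, 20 ms queue,
# 10 ms per generated token, 300-token response.
wait = total_wait_ms(50, 20, 10, 300)
print(wait)  # 3070.0 ms without streaming

# With streaming, the *perceived* delay is roughly time-to-first-token:
ttft = total_wait_ms(50, 20, 10, 1)
print(ttft)  # 80.0 ms -- shorter than a typical blink
```

This illustrates why a streamed reply can feel instantaneous even when the full response takes seconds to finish generating: the first token arrives inside blink-duration territory.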
 
This is not a statement of levity; I am serious when I say "ask Grok directly". It will provide, we hope, an honest and somewhat objective response. If you choose the "expert" level response, or whatever they use to describe it, you should probably experience a delay. Anything online, be it AI or otherwise, could have more access to our systems than we imagine. Maybe even access to your clipboard or something, so it had a few seconds to assess what you copied, even if you were never going to paste it? I don't know; worth asking Grok or ChatGPT about Grok. Who knows what agreements they made with government entities to rapidly expand their databases?
 
