Latency, UX and LLMs

Latency is well understood to be detrimental to the user experience. No one likes to wait.

But there are multiple ways this affects the UX:

  • Annoyance – I hate waiting for the page to load
  • Throughput – I get less done because of the time it takes to complete tasks
  • Success Rate – My e-commerce site has lower sales even if latency goes up only a little bit.
  • Control Fidelity – This video game is hard to control because the latency makes things squishy
  • Trust – Because it takes some time to confirm my actions I am never sure what is going to happen
  • Anxiety – I hate waiting for important results that take a long time to process.

In some cases the latency of interaction can change our relationship to the entire process. We see this a lot in software development, where the turnaround time from making a code change to seeing the result affects the way we think about the work. A tight iteration loop favors experimentation over deep thinking, which, depending on what you are doing, can feel good or bad.

For UX coding in particular, it's nice to have a quick loop because it often takes a lot of tweaking to get the final result you want, and we definitely see that many of these tools try to provide very low-latency ways to develop. When the cost of making a change is a few minutes, you have to change your whole way of thinking about the process.

My favorite example of this kind of thing is turn-by-turn navigation systems. When I first used them, I found them very frustrating because while they seemed to give the right directions, they responded very slowly if I made a mistake and missed a turn. Lots of beeping and shouting “Re-routing, re-routing”. The overall experience was not great and made me feel dumb.

As re-routing got faster, the UX changed dramatically. If you miss a turn now, it usually recomputes a new route quickly and without complaint, and you are back in business. Rather than feeling like it's scolding you, it feels empowering. You follow the directions or you fail to, but at every moment you have clear guidance about the best option, and you feel like you can always succeed.

LLMs

I’ve been thinking a bit about how LLMs and UX interact. One unfortunate thing about LLMs is that they do take some time to generate their results. When we see an example online we usually see the fully formed answer and can be impressed with all that the LLM was able to produce, but we don’t have to wait for it to read out the words one at a time.

Getting that word-at-a-time feedback is nice, but it's also not right for every situation. GitHub Copilot waits until it has a full suggestion and then pops it into my IDE. This seems like the right experience, since if the suggestion were dynamically changing all the time it would probably drive me crazy. I do experience some anticipation/frustration wondering if it's going to help. It also means that the people making Copilot have to make some decisions about how much time to give to the LLM and when to make a suggestion.
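Those timing decisions boil down to a couple of thresholds. Here is a minimal sketch of the idea; the numbers and function names are made up for illustration and are not anything Copilot actually does:

```python
# Sketch of the timing trade-offs a Copilot-style tool faces.
# Both thresholds are assumed values, chosen only to illustrate
# the two decisions: when to ask the model, and when a suggestion
# arrives too late to be worth showing.

DEBOUNCE_SECONDS = 0.3      # assumed pause in typing before querying the model
MODEL_BUDGET_SECONDS = 2.0  # assumed cap on how long generation may take

def should_request_suggestion(seconds_since_last_keystroke: float) -> bool:
    """Only query the model once the user has paused typing,
    so suggestions aren't constantly changing mid-keystroke."""
    return seconds_since_last_keystroke >= DEBOUNCE_SECONDS

def accept_suggestion(generation_seconds: float) -> bool:
    """Drop suggestions that took too long; a stale completion
    is worse than none once the user has moved on."""
    return generation_seconds <= MODEL_BUDGET_SECONDS
```

Tuning those two numbers is exactly the latency/UX trade-off: a shorter debounce feels more responsive but wastes model calls, and a longer budget allows better suggestions but risks the anticipation/frustration effect above.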

I’ve done some experiments using LLMs to auto-generate longer text from a set of “notes”. It's pretty amazing what GPT-4 can do here, but it results in an interesting UX situation. If I want to make changes to the generated text I can certainly edit it by hand, but if I want to keep using GPT to help me, then the chat UX is lacking.

Maybe I realize there was a mistake in my notes that confused GPT. I could change the output text but it might also be appropriate to correct the notes. I can fix them and regenerate but each time this happens the wait seems longer and longer. I only want to redo the parts that need to be redone.

In a very simple case, say where the change is at the end of the generated text, I can imagine a UX that lets me select the part I want to regenerate, with the untouched first part sent back in as the context for the completion. This seems like it would work OK, but it doesn't solve the problem when the edit needs to be at the beginning, and it could exhibit more subtle issues as well.
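The simple end-of-text case can be sketched in a few lines. Here `complete` is a hypothetical stand-in for a real LLM call (stubbed so the sketch runs); the point is only that the unselected prefix becomes the prompt context:

```python
# Sketch of partial regeneration: keep the text before the user's
# selection and resend it as context, so only the tail is regenerated.

def complete(prompt: str) -> str:
    # Hypothetical LLM completion call. Stubbed with a fixed
    # continuation so the sketch is runnable without an API.
    return " [regenerated continuation]"

def regenerate_tail(document: str, selection_start: int) -> str:
    """Regenerate everything from selection_start onward, using the
    untouched prefix as the prompt for the completion."""
    prefix = document[:selection_start]
    return prefix + complete(prefix)
```

This is also why the edit-at-the-beginning case is harder: changing the prefix invalidates everything after it, so there is no untouched context to reuse.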

In general, people really like the idea that LLMs can be driven by natural-language inputs, but as we all know, natural language itself is not always easy to use or to specify things in. This is part of why we consider prompt engineering to be a thing. I'm optimistic that as more people get access to and play with this technology, a bunch of new concepts for how to interact with it will emerge.

There’s a lot of effort going into all kinds of optimizations for LLMs in terms of speed, size, and accuracy; but I hope we also see some amazing new ideas in UX for LLMs.

