A Brief Note

This is an online submission. I apologize if you were expecting a voice-over for the demo. I am currently having some difficulty, but I have annotated my demo video. Thank you. The demo starts at video timestamp 0:31.

In case the quality is reduced after upload, please feel free to check out my deliverables here:

  1. Video link

  2. Prototype link (please fit to width and height before viewing)

  3. Mapping on FigJam

What is CIVICA?

CIVICA is an assistive dashboard tool for courtrooms and legal systems, made up of AI agents that are opinionated about fairness, not about people, by staying descriptive rather than interpretive.

CIVICA Breakdown

1. Staying descriptive and not interpretive

Feature: Holistic transcription with timestamping, speaker tracking (who spoke when), spoken-word transcription, voice-inflection notes, and gesture logging (sketched below)

NOT: automatically labeling someone as "angry", "guilty", or "deceptive"

Design Principle: Letting AI show users the signal, not decide the meaning
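
To make "descriptive, not interpretive" concrete at the data level, here is a minimal TypeScript sketch of what a CIVICA transcript event could look like. All names here are hypothetical illustrations, not part of the actual prototype: every field records an observable signal, and there is deliberately no field for interpretive labels.

```typescript
// Hypothetical sketch of a transcript event: it records observable
// signals (who, when, what, how) but has no field for interpretive
// labels like "angry", "guilty", or "deceptive".
interface TranscriptEvent {
  timestampMs: number;           // when the utterance started
  speakerId: string;             // who spoke, e.g. "witness-1"
  text: string;                  // verbatim spoken-word transcription
  inflections: InflectionNote[]; // measured voice signals, not emotions
  gestures: string[];            // logged gestures, e.g. "raised hand"
}

// An inflection note stays descriptive: it reports a measurement
// (pitch, volume, pace) rather than deciding what that change means.
interface InflectionNote {
  kind: "pitch" | "volume" | "pace";
  change: "rising" | "falling" | "steady";
  atMs: number;
}
```

What matters in this sketch is the absence: with no sentiment or credibility field, meaning-making stays with the human reading the dashboard.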

2. Centering human oversight

CIVICA is an augmentative, assistive tool.

NOT: a replacement for judgement

Design Principle: AI systems supporting human decision-making, not replacing it.

3. Designing to help people entertain ideas and exercise more insightful contestability

Feature: AI-agent ability to contest statements, suggest objections, and flag actions for the legal team (see the sketch after this principle)

Design Principle: Keeping users accountable.
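
As a rough illustration of how contestability could stay human-centered, here is a hypothetical TypeScript sketch (names are illustrative only, not from the prototype): the agent can only propose an objection and flag it; a person on the legal team decides whether it is raised or dismissed.

```typescript
// Hypothetical sketch: an agent-suggested objection is only a
// suggestion. It cites the statement it contests, states a basis,
// and waits for a human on the legal team to act on it.
interface SuggestedObjection {
  targetEventId: string;        // transcript event being contested
  basis: "hearsay" | "speculation" | "leading" | "relevance";
  rationale: string;            // descriptive reason, citing the record
  status: "pending" | "raised" | "dismissed"; // changed by humans only
  flaggedForLegalTeam: boolean; // surfaces the item on the dashboard
}

// Only a human reviewer moves a suggestion out of "pending".
function resolveSuggestion(
  s: SuggestedObjection,
  decision: "raised" | "dismissed"
): SuggestedObjection {
  return { ...s, status: decision };
}
```

Keeping the status transition inside a human-invoked step is what keeps users, not agents, accountable for the call.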

4. Factual and fair. Not opinionated about people

NOT: Passing judgement on individual behavior

Design Principle: Restraint in communicating opinions, and no ability to influence people with unverifiable outputs.

What kinds of uncertainties does CIVICA aim to help people navigate?

  1. As law becomes datafied, systems may become too complex or black-boxed. While keeping law factual and verifiable is great, it has grown too complex for the average juror, witness, or citizen to understand easily.

  2. How might AI/we/tools assist judges, juries, and/or litigants in a fair and transparent way?

  3. How might we design for fairness in a system we can't fully understand?

  4. How might we make a system that is "technically fair" also feel just to humans?

  5. How can we keep AI from forming opinions that may put stakeholders at risk or influence jurors?

Project Growth Pains

When exploring modalities, I was very interested in creating a solution that had cognitive, auditory, and kinesthetic modalities. Cognitive modalities such as neurotechnology that utilizes brain-computer interfaces, neural implants, and cognitive prosthetics all sounded very inspiring and cool to me. However, with more research, I realized that these cognitive modalities make justice more reliant on performance than on actual facts, creating a legal system where performance deeply influences outcomes. This outcome is not desirable and did not align with CIVICA's design principles, forcing me to focus solely on auditory and kinesthetic modalities.

When conducting speculative design, I knew I wanted to fall within the probable, preferable, and projected cones. I felt that anything in the plausible and possible cones would be a bit "too uncertain" for there to be an understandable value proposition and projection. (The "Future Cone of Speculative Design" I am referring to: https://www.linkedin.com/pulse/speculative-design-strategic-innovations-future-le-de-vincent-rd3te/ )

Another challenge was getting a good understanding of the design space of legal settings like courtrooms. For this project, I had to research and build an understanding of legal motions, objections, legal jargon, the information architecture of case and hearing descriptions, the many stakeholders, privacy laws, and design-justice principles. But the topic was very intriguing, and I genuinely enjoyed learning about it.

Built With

  • FigJam
  • Figma