A guy I knew way back sent me a research questionnaire, so I obliged and filled it out. I don't feel especially qualified to respond but I figured what the heck. The questionnaire is in plain text, my responses in italics.
-----------------
My name is Rich Erwin and I’m a first-year graduate student in the Foresight program at the University of Houston. I am working on a project regarding what might constitute a more human-centered Data Revolution, one which accounts more for individual needs of economic value, self-worth, and self-representation, as defined within the questionnaire below.
My goal is to determine whether any trends exist among the leading creators, pathfinders, observers and analysts of this revolution in progress; to determine what ideas might need more visibility and consideration; and to potentially spark further discussion of the topic.
A summary statement and nine questions are provided below. I would be very grateful for a moment of your time and consideration to answer each to the best of your ability and return the answers to me by 5:01 PM Pacific Time on Thursday, October 21st (12:01 AM UTC on Friday, October 22nd).
Any responses will remain within the context of my coursework.
If you have any questions or concerns, please do not hesitate to contact me by email, text or voice.
Best Regards,
[REDACTED]
Questionnaire
Humanity has had agricultural and manufacturing revolutions. I believe that, with the maturing of the internet and access to relatively cheap and vast computational power, the ways the two can be used together have placed us in the midst of a Data Revolution. However, as in the previous two revolutions, we are dealing with significant issues of how we as individuals can and are allowed to react to it in terms of economic value, self-worth, and self-representation.
I believe that you have a unique insight into this issue and would greatly appreciate your responses to the questions below.
1) What tools, processes or precepts do you believe need to be in place in order to drive, effect and maintain a more human-centered Data Revolution, and why?
A core element of being fully human is Autonomy: independence from external control. Becoming an adult is a pleasure because it increases autonomy, and jail is a punishment because it decreases it.
Many or most Data Revolution (DR) technologies make gestures toward autonomy: in self-representation (e.g. select your profile image, check boxes for news feed interests); in nourishing your sense of self-worth (post content that earns “likes”, block people who annoy you); and in your economic value to DR actors (respond to or block ads). These gestures seem doomed to failure in an environment with Big Data analysis of shopping patterns and cellphone data, stoplight cameras, and facial recognition applied to photos outside your control. [Here’s an example: an American was vacationing in Cuba in violation of the embargo, and was careful not to post vacation photos on the internet. An unrelated person posted vacation photos with that American in the background. The relevant American office used software to identify the American and fine them. One can imagine many variations on this theme.]
A truly human-centered DR would preserve human autonomy, and I have not the faintest idea how that might be accomplished. Perhaps, like freedom itself, it can only be a guiding principle to be striven for and improved upon, but never completely met.
2) What advice or recommendations would you provide to a young adult regarding how to navigate their life in the midst of the Data Revolution?
Identify and rigidly control the Personally Identifiable Information (PII) that can hurt you if disclosed, and use camouflage without apology. For example, many apps that don’t really need it ask for your date of birth (DOB); often they are simply verifying your age. If they get hacked – or rather WHEN they get hacked – criminals will have a clue that’s helpful for robbing you, e.g. by electronically filing your tax return and collecting your refund to a burner account. You need to camouflage your PII where possible. Pick a date – such as January 1 – that is the DOB you’ll give out to those apps asking to match your food preferences with your Star Trek character, and reserve your actual DOB for those very few institutions that need it, e.g. your federal tax accounts.
You can take similar precautions with other information: systematically misspell your name, use a PO box instead of a street address, etc. This is all chaff, and it can be defeated by a determined AI, but you will be less vulnerable.
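To make the chaff idea concrete, here is a minimal sketch (purely illustrative; the service names, decoy values and helper are my invention, not any real tool) of keeping a small ledger of the decoy PII handed to each service, so the camouflage stays consistent if an app asks twice:

    # Illustrative only: a tiny ledger of decoy PII handed out per service,
    # so the camouflage stays consistent. All names and services are made up.

    DECOY_DOB = "1970-01-01"      # the reserved January 1 birthdate
    TRUSTED = {"irs.gov"}         # the very few institutions that get real PII

    ledger: dict[str, dict[str, str]] = {}

    def dob_for(service: str) -> str:
        """Return the decoy DOB for a service, recording it for consistency."""
        if service in TRUSTED:
            raise ValueError(f"{service} gets your real DOB; keep it out of this ledger.")
        entry = ledger.setdefault(service, {"dob": DECOY_DOB, "name": "Jon Smyth"})
        return entry["dob"]

    print(dob_for("star-trek-quiz.example"))   # -> 1970-01-01
    print(dob_for("star-trek-quiz.example"))   # same decoy the second time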
There’s a tradeoff to all this, of course. All that data about yourself out there can be used to build deeper relationships with people physically distant from you. Before the DR, it was rare for friendships to survive a cross-country move – which is why 25-year class reunions used to be a big deal. With the DR, you can have a class reunion every Sunday. Embrace this.
3) How would you rate the issues below as impediments to a human-centered Data Revolution from most intractable (1) to least intractable (6)? (If you think any are not at all intractable, please mark with an X and explain why.)
a) Surveillance through access of “infinite content”
(5) This is an inherently intractable problem, since in a competitive world there is enormous benefit to an organization – whether geographic state or stateless actor – that surveils its members and non-members. For example, if we established an ideal scheme of controlling our government’s use of data, we would have done nothing about actions by other governments, and they would have a competitive edge against our government. The solution cannot be to give up, but I don’t see the solution yet.
b) The effect of data algorithms in use on diversity, equity and empathy
(4) As a person with mostly majoritarian characteristics (white, male, grumpy) I am reluctant to comment on the impact of algorithms on diversity – I just don’t have the life experience to make evaluations – but I can speculate that it might be a mixed bag. Facial recognition is notoriously bad at identifying nonwhite faces, to the significant disadvantage of POCs in this month’s administration of bar exams across the nation (https://abovethelaw.com/2020/09/online-bar-exams-rely-on-facial-recognition-tech-and-guess-what-its-still-racist/). Algorithms could be used to promote inequality (e.g. by identifying race based on matching job applicants to their Facebook photos) or equality (e.g. by sifting actual job qualifications out from irrelevancies), but I just don’t have enough experience to comment further.
I would like to see the DR do something about empathy. Online personas famously divorce a person’s humanity from their digital representations; the use of handles and nonhuman avatars only makes it easier to forget that behind some of these online postings are actual people. I don’t see an easy solution.
c) Social Media and the byproducts of “truth versus popularity”
(6) Algorithmic selection of content seems to have the pernicious effect of presenting material on the basis of popularity rather than quality (and never mind truth…) because the most popular sites make money on clicks. This is how they live: by offering content you’ll click on.
I think it’s wonderful – and surprising – that there are small efforts to do something about that – such as Twitter and Facebook amending posts about voting with links to information about voting – but that doesn’t seem like enough. The care that Facebook and Twitter took with Giuliani’s ridiculous laptop story is praiseworthy, but the pushback was enormous. I expect future scams to be a little more subtle because the payoff is immense.
d) Potential for conflict within society between individuals significantly reliant on virtual versus physical reality
(3) As of yet I don’t see this conflict, but who knows?
e) Physical self-augmentation versus replacement by artificial intelligence
(2) I feel it most likely that AI and its junior cousin, VR control of drones, will swiftly outpace physical self-augmentation, because the latter has a weak point: the human body. For most tasks involving interaction with physical reality, AIs will be better at everything except taking sick days.
f) The effect of the Data Revolution on the eventual advent of the Genetic Revolution (CRISPR, TALEN, Epigenetic Editing, etc.), specifically with regard to restoration or prevention versus the desire for enhancement
(X) While DR/GR interactions have grave potential for harm, they have plenty of opportunity for benefit as well, so I don’t see them as intractable problems; ultimately there is a political question of who shall set up the rules for whose benefit.
4) Which would you consider the most problematic issue regarding “infinite content”?
a) Surveillance by authorities or factions
***This has always been problematic, but “in theory” it could be controlled by an alert citizenry with effective representatives. The theory does not always seem to have been borne out in practice, but I retain hope.
b) Treatment of your data and actions online as monetizable content
***This is really annoying – I want a piece of the action – but not a direct harm to me, so I would rate it as least problematic. It reminds me of the biotech industry patenting genes taken from patients without compensation, e.g. the Henrietta Lacks scandal. I despair of any justice there.
c) Potential for attempted behavior modification
***This is the most problematic issue, since it directly impacts personal
autonomy.
d) Self-restriction of access to a diverse set of viewpoints over time
***This is problematic indeed, but it’s a self-inflicted wound and so not
quite as bothersome as 4c. Still, we see the negative impacts of this
self-restriction playing out in the current American elections as each party
becomes more convinced the other is the Devil.
e) Potential for creation of artificial constructs of yourself
***I don’t see this as problematic to the extent that the constructs would be controlled by me. It could be helpful to agent myself for routine tasks, although collating the various experiences of my selfstream sounds like an interesting problem. Constructs controlled by someone else sound creepy; I imagine state entities might want them to predict and modify my behavior (see 4c); I would like a ban on private entities doing the same.
f) All five are equally problematic
***Disagree
g) No such issue exists
***Disagree
5) By what standardized means should data algorithms be audited for potential issues regarding diversity, equity and empathy?
a) Emphasis on frequency – at least once a year
*** Audits should be automated so they can be continuous, just as virus checking is continuous. (A toy sketch of what such a continuous check might look like follows this question.)
b) Emphasis on auditing outside the entity that uses the algorithms in question
*** Internal audits are nice for an organization determined to avoid problems, but outside audits are necessary for the same reason that we have virus checkers separate from the app provider.
c) Emphasis on standardized governmental regulations that cover a) and b)
*** Governmental regulations set minimums that are (in theory) determined by the public (so there’s more confidence in them), but perhaps it is more important that they set up a level playing field, without which there can be a “race to the bottom” – incentives for cheating.
d) Emphasis on insurance companies requiring audits in return for coverage against lawsuits
***In an analogous situation: some of the best Continuing Legal Education teaching lawyers to avoid ethical problems likely to result in lawsuits comes from insurers (such as Attorney Protective) that provide it in an obvious and laudable attempt to minimize problems and payouts. IMO this works because any law to immunize lawyers from being sued would be a tough sell to an elected legislature, hahaha!
OTOH, DR firms have enough money to purchase a bill immunizing them; they can also word the standard user agreement to send things to arbitration (and arbitrators usually favor them), and if nothing else just stretch out litigation until it’s pointless (as the saying goes: “Delay, Deny, Until They Die”). TL;DR: lawsuits may be nice but are unlikely to be an answer.
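Here is the toy sketch promised in (a): a continuous audit that compares per-group selection rates in an algorithm’s decisions and flags disparity. It is my own illustration, not a standard; the data is invented, and the 0.8 cutoff merely echoes the EEOC “four-fifths” rule of thumb.

    # Toy continuous-audit sketch: compare per-group selection rates in a
    # batch of algorithmic decisions and flag disparity. Invented data; the
    # 0.8 cutoff echoes the EEOC "four-fifths" rule of thumb.

    from collections import defaultdict

    def selection_rates(decisions):
        """decisions: iterable of (group_label, was_selected) pairs."""
        selected, total = defaultdict(int), defaultdict(int)
        for group, ok in decisions:
            total[group] += 1
            selected[group] += ok
        return {g: selected[g] / total[g] for g in total}

    def audit(decisions, threshold=0.8):
        """Flag any group whose rate falls below threshold * best rate."""
        rates = selection_rates(decisions)
        best = max(rates.values())
        return {g: (r, r / best >= threshold) for g, r in rates.items()}

    # One invented batch; a continuous audit would run this on every batch
    # of live decisions, the way a virus scanner runs in the background.
    batch = ([("A", True)] * 60 + [("A", False)] * 40 +
             [("B", True)] * 35 + [("B", False)] * 65)
    for group, (rate, ok) in audit(batch).items():
        print(f"group {group}: selection rate {rate:.2f} {'OK' if ok else 'FLAG'}")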
6) Given the state of social media, would you allow someone you care about to be introduced to it?
a) No
b) Yes, but only under very strict regulations, controls and rules
c) Yes, but only after educating them to improve their awareness, and with some controls and rules in place
d) Yes, but only after educating them to improve their awareness
e) Yes
It depends on my relationship to them; if they’re someone I’m responsible for (e.g. a child), then “C”; if someone for whom I have no responsibility other than friendship, then “D”.
7) Would you support increased surveillance that focused on significantly reducing cyber-bullying?
a) No
b) Yes, but under very strict laws, controls and rules
c) Yes, but accompanied by extensive public education of the process
d) Yes
***I do not know enough about this subject to form an opinion. My uninformed
opinion is that DR platforms should enforce Do-Not-Contact requests, so that if
A is bullying B, then B can block A without any need to give a reason. I don’t
know whether that would solve the problem.
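To illustrate the Do-Not-Contact idea, here is a toy sketch (entirely hypothetical; not any real platform’s API) in which the platform consults the recipient’s block list before delivering anything, with no reason ever recorded:

    # Toy sketch of platform-enforced Do-Not-Contact: B blocks A with no
    # reason required, and the platform silently drops anything A sends B.
    # Entirely hypothetical; not any real platform's API.

    blocklists: dict[str, set[str]] = {}

    def block(recipient: str, sender: str) -> None:
        """Recipient blocks sender; no justification is asked for or stored."""
        blocklists.setdefault(recipient, set()).add(sender)

    def deliver(sender: str, recipient: str, message: str) -> bool:
        """Deliver unless blocked; return whether the message went through."""
        if sender in blocklists.get(recipient, set()):
            return False      # dropped, ideally without notifying the sender
        print(f"{recipient} <- {sender}: {message}")
        return True

    block("B", "A")
    deliver("A", "B", "hey")       # False: silently dropped
    deliver("C", "B", "lunch?")    # True: delivered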
8) Do you think people who spend significant time in virtual realities might eventually become a potential threat to physical reality?
a) No
b) No – people will always perceive what they want to perceive as their dominant reality
c) Yes – some people aren’t able to put each reality in perspective consistently
d) Yes – and we need laws to counter the threat
ALL human perception of reality is a virtual reality (VR). Our field of vision is limited, and we are constantly filling in the gaps depending on what we looked at last. Many car accidents result from this sort of thing; that’s why many side mirrors have a printed warning: “Objects In Mirror Are Closer Than They Appear”.
What I believe this question is referring to is electronically supported VRs, which take things to new levels of divorcing our human perceptions and reality creation from what exists in the physical world. People staring at their cell phones have the perception that they are aware of the road because their brains are filling in the gaps, right up to the crash. Drone operators thinking they are attacking enemy troops end up killing civilians. As long as VRs are connected to real-world tools, the opportunity for threats exists. (See John Scalzi’s “Lock In” series for a light – but not light-hearted – examination.)
If the question concerns VRs not connected to physical tools, perhaps the danger is less. There is still the risk of harm from hacking data – but that’s not unique to VR – and of self-harm from neglect of physical reality – but again, that’s not unique to VR, IMO.
9) If your employment required physical augmentation of your body versus being replaced by an artificial intelligence construct, would you do so?
a) Yes
b) Yes, if who I worked for covered the cost of installation
c) Yes, if who I worked for covered the cost of installation, replacement and eventual removal
d) Yes, if extensive regulations for my protection were in place
e) No
If I were going to be paid for the work of my AI self, I’m cool with that. I assume that the question instead means that the employer gives me the choice: take the augmentation or lose my job. No reasonable person would trust an employer’s representation as to the safety of the augment; a third party would have to certify its safety.
(In this context the “SeaWorld v. Perez” case is instructive: SeaWorld told its employee Dawn Brancheau that its working conditions were safe when in fact they were not; while the majority upheld OSHA’s authority to require SeaWorld to do something about that, the dissenting Judge Kavanaugh argued that the employee had assumed the risk. Kavanaugh is now on the Supreme Court. https://caselaw.findlaw.com/us-dc-circuit/1663286.html)
The nature of the augmentation and its reversibility would be elements of my choice.