Tools and Platforms: By Kranzberg’s Laws

I was interested in writing this week's blog post because, over the last few years at my various jobs, I have had access to K-12 school curricula and have seen how students, their teachers, and parents react to and interact with the educational system today. Living in an age when we look to tech for big, solve-all answers to our inquiries and problems, one question often pops into my head: at what point does technology cease to be helpful and become harmful to its creators?

I often think back to my time working as a tutor for the NYPL, when one of my students asked me how to spell a word and what it meant. I pointed him to the stack of dictionaries sitting in the middle of the room collecting dust. He told me he had never learned how to use a dictionary, and it was the same with the other students in our tutoring groups. He quickly picked up a laptop and said, “Why can’t I just type in what I think it’s spelled like? The laptop will tell me everything I need to know.” This interaction is still so clear to me years later because, even as I try not to fall down the rabbit hole of believing we currently live in the prequel to Pixar’s Wall-E, this kid completely blew my mind. Knowing how to use a dictionary was ingrained in me before I was his age because it was considered a necessary learning skill. Now he had just taught me how drastically life skills can change and how quickly concepts can become obsolete.

What really struck me was historian Melvin Kranzberg’s Six Laws of Technology. The first law states, “Technology is neither good nor bad; nor is it neutral.” It is referenced at the beginning of Binbin Zheng, Mark Warschauer, Chin-Hsi Lin, and Chi Chang’s “Learning in One-to-One Laptop Environments: A Meta-Analysis and Research Synthesis,” and I felt that Kranzberg’s laws could help synthesize the various arguments flowing through all the readings. Humans are needed to build, program, and run the technology, but at what point does technology, especially ed-tech, stop being a vessel for humans’ intellectual and cultural capital, and the biases that can unfortunately come along with them, and become the mechanical void of the neither good, nor bad, nor neutral?

According to the Merriam-Webster Dictionary, a tool in the technological sense is something used in performing an operation or necessary in the practice of a profession, and a platform is a vehicle used for a particular purpose to carry a usually specified kind of equipment. The tools and platforms in this week’s readings illustrate the fine balance between educational technologies serving as positive tools for human learning and remaining indifferent to educational needs. Gaggle, according to Caroline Haskins, boasts that it not only gives a school system data structure by providing tools and a platform for student-teacher work and communication, but also saves students’ lives from suicide, at the cost of constant surveillance and a lack of student privacy. Other algorithms, such as the AI grading programs Lauren Katz describes, are designed and trained by humans in the optimistic hope of being as useful as possible to students, teachers, and writers. But that comes with the risk of biases seeping into the system, because the technology can only work with what it’s given. We also have situations such as Pearson’s move to a Digital-First strategy. As Lindsay McKenzie explains, the plan is to phase out print textbooks on a set schedule as the company focuses on its digital platform and course materials. With this shift, the digital platform and its course materials will not only be updated more frequently to include new research developments, technologies, and breakthroughs, but will also be offered at a rental rate less expensive than buying print textbooks.

After doing each of this week’s readings, the words “tools” and “platforms” that title this week’s section felt uncanny to me in light of Kranzberg’s law. In all the readings this week, there is a main situation or problem dealing with technology as an educational component, and I was left grappling with questions of what makes this technology neither good nor bad nor neutral. There were examples in the readings that showed how technologies can positively uplift some, negatively impact others, or make no difference at all. I believe Audrey Watters asked some of the important questions for us to discuss together when it comes to educational technology: “What do we need out of educational technology? Are we only interested in raising test scores or learning efficiencies?” These tools and platforms can, in their own right, make a great impact on our learning processes, which should be the main reason there is a major boom in educational technology that does not seem to be dying down any time soon. But human reasoning needs to be analyzed and kept at the focal point of how we proceed with educational technology, because the technology itself does not exist on a moral basis of being good, bad, or neutral.

4 thoughts on “Tools and Platforms: By Kranzberg’s Laws”

  1. Kathleen Begonia

    Thank you, Jelissa. I definitely agree that we really need to use our human reasoning to analyze how we approach educational technology. I like how you highlighted: “Technology is neither good nor bad; nor is it neutral.” It really depends on how we use technology to do our work: is it supporting our work or creating an obstacle? I think we really have to be critical when we choose our tools and ensure they support the work we do, so we don’t compromise the people we’re trying to help. A tool that we perceive as beneficial may not always work best for our students.

  2. Anthony Wheeler (he/him)

    Hi Jelissa,

    Awesome post! I’ll start by saying something controversial: I do not get the hype around Wall-E. People LOVE that movie, and I just found a lot of it strange and unnerving. You definitely asked the heavy question: when does ed-tech stop being a dumping ground for human knowledge and imperfection, and how do those imperfections manifest in the output? I think Kathleen made a good point in saying it’s about how we incorporate the tool to accomplish a greater goal, without making it the foundation of the goal, so that we don’t compromise the entire thing. I’ve encountered many projects where the tool being developed had good intentions but only reinforced the problematic features of the initial problem.

    An everyday example of this was something a friend of mine encountered a couple of weeks ago: Do you guys remember how around Pride week the MTA hung anti-homophobia service information signs with various LGBTQ flag designs in the stations? They were solid and made sense given the type of behavior they were combatting. WELL, a couple of weeks ago there were signs hung with a similar message in light of the prejudices surfacing due to COVID-19. They said, “No ignorance, racism, or xenophobia allowed at this station at any time,” with reminders to wash hands and stay informed. Fine, except this service information sign featured the colors and symbols of the Chinese flag. There was a dangerous level of tone-deafness in this idea, especially given all of the racism towards Asian Americans stemming from ignorance, even from our president.

    I know that may seem like a random story and like I might be rambling (and I might be a little, I’m on day 20 of quarantine), but there’s a point! Think of the sign as a technology, which, by all means, it is. It performs an action to make our lives easier. However, what happens when our implicit bias seeps into said technology? It’s like the Princeton study on AI I mentioned a few weeks ago: the technologies we’re building in ed-tech to solve problems and make our community members’ lives easier could easily inherit implicit biases (of multiple types) and negatively affect the same people we are trying to help.

    In the end, IS there a way to move away from this? Or do we have to wait for machines to become sentient, free of human bias and imperfections, and able to decide for themselves? Maybe there’s a reason that idea is, unfortunately, the basis for so many terrifying sci-fi film and media concepts: it feels far from the present.

  3. Luke Waltzer (he/him)

    Terrific post and discussion thus far!

    When we meet later, I want to be sure that we ask specific questions about the *affordances* of these emerging tools and technologies, and how those affordances structure specific ways of thinking, working, teaching, and knowing. Also important are the experimental designs that drive and reinforce many of the assumptions flowing through this work, as well as the rhetorical construction and expansion of thinking about ed tech.

    The meta-analysis on “One-to-One Laptop Environments” that Jelissa cites offers a broad overview with some limited conclusions… but it does offer us a sense of how narrowly educational research is examining the impact of such massive interventions (i.e., we can only really measure what is captured via standardized tests). And be sure to look at the APLU Adaptive Courseware Initiative and linked projects–what can we learn about adoption of such technologies through how this project is being presented?

    Looking forward to seeing you all soon-

  4. Jason Holt

    I’m thinking a lot about that last question: “What do we want from educational technology?” Not long ago, the answer seemed to be a clever or different way of learning, sometimes just a motivation or a change of pace. Now the answer is much heavier. We are asking it to replace an entire educational system and to maintain its infrastructure and community. Can it do it? Stay tuned….