LDT511 U6 - WOL - I'm Reevaluating the Curve

For my ASU course LDT511, in Unit 6 we were asked to revisit where we placed ourselves on Rogers' Diffusion of Innovation curve at the beginning of the semester and then, after spending the course studying different types of emergent technologies, reevaluate where we fall on that curve.


I originally stated the following: “I believe I fall somewhere between the Innovator and Early Adopter, however for some topics I’m more in the Early Majority or even Late Majority stages of the Diffusion of Innovation Curve. I’m always looking at new or different technologies and trying to learn new approaches to save time, money and work for my designs, but I’m still really cautious about evaluating new technologies, looking at the impact.”

 

I’m not sure my opinion of where I fall on the curve has changed much. I still feel that many things are moving too fast. After our conversation about Big Data and the research I did on how it impacts privacy, I still believe adoption should move more slowly than it does today. So even though I now have a better understanding of and feel for the technology, perhaps even an Innovator's understanding, I still don’t see myself implementing some of it until I’m in the Early Majority or even Late Majority stage of the curve. In my opinion, we simply don’t need to give up privacy in order to innovate.


Professor Steven Salik asked us six different questions, of which we could choose two to answer. I chose the following two questions.

Question 1: What were the pivotal moments or key technologies that significantly impacted your understanding? What were the breakthroughs or insights that stood out to you?

Unit 4 was, I think, one of the most interesting assignments I’ve worked on in my Master’s program. We had about four hours to take a topic, run it through a chatbot to develop a script, feed the script into a text-to-audio application, break the text into segments and have a text-to-image application generate images for a presentation, and then lay the images and audio into a presentation we could post on YouTube. I ended up doing the assignment several times because I really enjoyed the process. It was remarkable to take an idea and end up with a pretty good presentation in a couple of hours, and it was a lot of good practice with emergent technologies.

Out of this unit, these two items stood out the most:

  •     The development of large language models (LLMs): LLMs are neural networks trained on massive amounts of text data, allowing them to generate text and write different kinds of creative content. This has been a big topic over the last year, with schools and industry going in different directions. I cannot even reach Google Bard from my ASU account, yet many in industry are encouraging the use of chatbots.
  •     The development of text-to-image applications: These are used to create a variety of high-quality images from text prompts. The technology is still pretty rough, and you can get a wide range of results back if you aren’t specific enough in your prompts, but it is really cool.

In addition to these key technologies, a number of breakthroughs and insights have stood out to me in the field of emergent text-to-image technology. One such breakthrough was the realization that LLMs can be used to represent text as images, or the other way around, images as text.

My attitude towards new technologies has evolved over the course. I used to be skeptical of new technologies, but I have come to really enjoy working with them and believe that they can have a positive impact on our lives and the field of instructional design.

Question 2: Knowing what you know now, how do ethical and social considerations play a role in your stance on technology adoption? Has learning about the potential ethical dilemmas and social impacts of emergent technologies (like AI, blockchain, or big data) affected your willingness to embrace them? Discuss a specific ethical dilemma that resonated with you and how it has influenced your position on the innovation curve.


Ethical and social considerations play a significant role in my stance on technology adoption. While I recognize the immense potential of emergent technologies like AI, blockchain, and big data to transform society, I also acknowledge the ethical dilemmas and social impacts that can arise from their implementation. Learning more about these potential issues has made me cautious in embracing some aspects of new technologies without careful consideration of their broader implications.

One particular ethical dilemma that resonated with me is the issue of algorithmic bias. Algorithmic systems, which are increasingly used in decision-making processes, can reproduce societal bias and discriminatory behavior, raising concerns about fairness, equity, and the potential for creating new social problems. I see it in Google Bard, ChatGPT, and other artificial intelligence tools. I know their developers try to correct for these issues, but I think the simple fact is that in many cases they have over-corrected, creating a negative bias in the opposite direction from the one intended.

This realization has influenced my position on the innovation curve, making me more inclined to take a late majority approach. Instead of blindly embracing new technologies, I believe it is crucial to conduct thorough ethical and social impact assessments before widespread adoption: assessing and identifying potential risks, mitigating potential harms, and ensuring that the benefits of the technology are distributed without bias.

Other topics that I see as major ethical dilemmas associated with emergent technologies include:

  •     Privacy and data protection: The collection, storage, and analysis of vast amounts of personal data raise concerns about privacy intrusions, data breaches, and the potential for misuse.
  •     Autonomy and control: The increasing reliance on automated systems and algorithms can diminish human autonomy and control over decision-making, raising concerns about individual agency and democratic processes.
  •     Job displacement: The automation of tasks by intelligent systems can lead to job displacement and exacerbate existing social inequalities.

Addressing these ethical dilemmas requires collaboration among policymakers, developers, researchers, and the public. It is going to require ethical guidelines that promote responsible innovation and empower individuals to understand and manage their own data.



