[{"content":"Abstract This paper examines addiction through an evolutionary lens, proposing that both substance and behavioral addictions reflect the interaction between evolved neural mechanisms and modern environmental contexts. Drawing on recent neuroscientific research, this paper explores relationships between the mesolimbic dopamine system, stress response pathways, and endogenous opioid circuits, all adaptations that enhanced survival and reproduction in ancestral environments, create vulnerabilities to addiction when exposed to evolutionarily novel stimuli. The evolutionary mismatch theory helps explain why neurotypical individuals develop addictive patterns despite harmful consequences, with substances directly manipulating neural pathways that evolved for natural rewards, while behavioral addictions engage these same circuits through supernormal stimuli that trigger evolved psychological mechanisms related to resource acquisition, status competition, and social bonding. This evolutionary framework suggests novel approaches to prevention and treatment, emphasizing environmental modifications that reduce mismatch and interventions that work with rather than against evolved psychological mechanisms. By conceptualizing addiction as an interaction between our evolved biology and contemporary environments, this perspective integrates findings across levels of analysis and provides both clarity and guidance for addressing one of society's most persistent health challenges.\nKeywords: addiction, evolutionary psychology, neuroplasticity, reward processing, behavioral addiction, substance use disorders, evolutionary mismatch, dopamine, stress response, treatment approaches\nAddiction: Behavioral and Neural Predispositions \u0026mdash; An Evolutionary Perspective Addiction represents one of the most persistent challenges in modern healthcare, with both substance and behavioral manifestations causing significant individual suffering and societal costs. Despite decades of research, prevention and treatment outcomes remain suboptimal, suggesting the need for more comprehensive theoretical frameworks to understand these complex disorders. This paper proposes that evolutionary theory provides a unique lens through which to examine addiction, offering insights into why humans possess behavioral and neural predispositions that render us vulnerable to both substance and behavioral addictions in contemporary environments.\nThe evolutionary perspective views addiction not merely as a pathology but as the result of normal brain mechanisms interacting with evolutionarily novel stimuli and contexts (Hunt et al., 2024). This framework helps explain why neurotypical individuals develop addictive patterns despite their harmful consequences and provides a foundation for understanding individual differences in vulnerability. By examining addiction as an evolutionary mismatch, where adaptations that were beneficial in ancestral environments become maladaptive in modern contexts, we can better conceptualize prevention and treatment approaches.\nThis paper analyzes current research on the behavioral and neural predispositions underlying addiction vulnerability, examines the shared and distinct pathways between substance and behavioral addictions, and explores implications for clinical intervention. 
Throughout, an evolutionary perspective serves as the integrative framework, connecting disparate findings into a coherent understanding of why these vulnerabilities exist and how they manifest in contemporary human experience.\nEvolutionary and Neurobiological Foundations of Addiction Recent integrative approaches to understanding addictive disorders have emphasized the value of evolutionary perspectives in contextualizing both substance and behavioral addictions (Hunt et al., 2024). Examining these conditions through an evolutionary lens not only illuminates why human brains are vulnerable to addiction but also helps explain the shared neurobiological mechanisms that underlie seemingly distinct addictive patterns. This section analyzes current knowledge on the behavioral neuroscience of addiction within an evolutionary framework.\nNeurobiological Mechanisms Through an Evolutionary Lens The mesolimbic dopamine system, which functions as the brain's primary reward circuit, evolved to motivate behaviors essential for survival and reproduction (Alcaro et al., 2007). This system centers on dopaminergic projections from the ventral tegmental area (VTA) to the nucleus accumbens (NAc), a pathway that encodes not just hedonic pleasure but prediction errors and incentive salience (Alcaro et al., 2007). When examining addiction from an evolutionary perspective, it becomes clear that this ancient motivational system becomes pathologically engaged regardless of whether the stimulus is a psychoactive substance or a naturally rewarding behavior (Alcaro et al., 2007).\nEvidence from neuroimaging and electrophysiological studies demonstrates that similar patterns of dopaminergic activity occur during both substance use and engagement in potential behavioral addictions such as gambling, gaming, and sexual behaviors (Alcaro et al., 2007). The critical distinction lies not in the final common pathway but in the initial mechanism of action. While substances directly bind to receptors and artificially stimulate neurotransmission, behaviors activate these same circuits through engagement with evolved psychological adaptations related to resource acquisition, status competition, and social bonding (Hunt et al., 2024).\nNeuroplasticity and Learning Mechanisms The remarkable neuroplasticity that allowed human ancestors to adapt to diverse environments also creates vulnerability to addiction. Long-term potentiation (LTP) in glutamatergic synapses of the mesolimbic pathway creates powerful associative memories that drive compulsive seeking behaviors in addiction (Kalivas \u0026amp; O\u0026rsquo;Brien, 2007). These neuroadaptations represent a form of pathological learning, where evolutionarily novel stimuli or behavioral patterns trigger exaggerated responses in systems designed for different selection pressures.\nThis learning process involves substantial changes in synaptic structure and function, including alterations in AMPA receptor trafficking, dendritic spine morphology, and transcriptional regulation that persist long after cessation of the addictive stimulus (Kalivas \u0026amp; O\u0026rsquo;Brien, 2007).\nStress Systems and Negative Reinforcement The relationship between stress and addiction exemplifies the evolutionary mismatch concept central to understanding these disorders. 
Corticotropin-releasing factor (CRF) signaling in the extended amygdala, which evolved as an adaptive response to environmental threats, creates negative emotional states during withdrawal that drive continued substance use or behavioral engagement (Zorrilla et al., 2014). This phenomenon, known as negative reinforcement, reflects relief from aversive states rather than pursuit of pleasure, and it characterizes the later stages of addiction.\nFrom an evolutionary perspective, these stress responses evolved to motivate adaptive behavioral responses to threats. However, in the context of addiction, they become dysregulated and ultimately perpetuate maladaptive patterns. The hypothalamic-pituitary-adrenal (HPA) axis, which coordinates physiological stress responses, shows similar patterns of dysregulation across both substance and behavioral addictions, with evidence of altered cortisol responses and blunted stress reactivity in chronic conditions (Adinoff, 2004).\nEndogenous Opioid Systems The endogenous opioid system, which evolved primarily to regulate pain and pleasure responses to natural rewards, plays a crucial role in both substance and behavioral addictions (Merrer et al., 2009). This system modulates hedonic experiences across domains, from food consumption to social bonding, through the release of endorphins that act on opioid receptors throughout the brain's reward circuitry (Merrer et al., 2009). In behavioral addictions, endorphin release during rewarding activities creates experiences similar to exogenous opioid administration, though typically at lower intensities (Roth-Deri et al., 2008).\nResearch by Merrer et al. (2009) demonstrated that both substance use and engagement in potentially addictive behaviors activate similar patterns of endogenous opioid release in the ventral striatum, suggesting a common hedonic mechanism. This shared neurochemical response helps explain why activities as diverse as gambling, gaming, and sexual behaviors can produce addictive patterns reminiscent of substance dependencies.\nDistinct Neurobiological Pathways Despite these shared mechanisms, important distinctions exist in how substances and behaviors engage the brain's reward system. Substances directly alter neurotransmission through receptor binding, enzyme inhibition, or reuptake blockade, producing effects that can be more intense and immediate than natural rewards. In contrast, behavioral addictions typically activate natural reward pathways through engagement with evolutionarily relevant domains like status, social connection, or resource acquisition (Hunt et al., 2024).\nThis distinction helps explain differences in addiction likelihood and progression. The direct pharmacological effects of substances can more rapidly dysregulate neural circuits than behaviors, which may account for differences in prevalence and severity between substance and behavioral addictions. Nevertheless, the fundamental vulnerability of reward systems to pathological engagement remains consistent across addiction types, reflecting their common evolutionary origins.\nImplications for Treatment Approaches Understanding the shared evolutionary and neurobiological foundations of addiction has significant implications for treatment. 
Medications that modulate shared neurochemical pathways, such as opioid antagonists like naltrexone, have shown efficacy in both substance use disorders and behavioral addictions like gambling disorder (Ward et al., 2018).\nThe evolutionary perspective also suggests that environmental modifications that reduce mismatch between our evolved psychology and modern contexts may be powerful preventive tools. Limiting availability, reducing exposure, and creating social environments that satisfy evolved needs through adaptive rather than maladaptive means represent promising directions for prevention efforts based on evolutionary principles.\nBehavioral Addictions: Evolutionary Insights into Novel Disorders The emergence of modern behavioral addictions, such as internet gaming disorder, presents both challenges and opportunities for addiction science. Evolutionary theory provides a unique lens for understanding these conditions, conceptualizing them not as entirely novel disorders but as manifestations of evolved psychological mechanisms interacting with unprecedented environmental stimuli.\nSupernormal Stimuli in Digital Environments From an evolutionary perspective, many behavioral addictions involve engagement with supernormal stimuli, which are artificial stimuli that trigger evolved psychological mechanisms more intensely than the natural stimuli these mechanisms evolved to process (Goodwin et al., 2015). Video games, for instance, provide more concentrated and immediate rewards than the resource acquisition and status competition activities they mimic, potentially engaging evolved reward systems in ways that natural activities cannot match.\nThis concept helps explain why activities without chemical intoxication can produce addiction-like patterns of behavior. The digital environment, in particular, has created unprecedented opportunities for exposure to supernormal stimuli that trigger evolved psychological mechanisms related to social comparison, status competition, sexual behavior, and exploration. Social media platforms, for example, provide constant opportunities for social comparison and status evaluation (activities with clear evolutionary relevance) but at frequencies and intensities that far exceed ancestral experiences.\nDomain-Specific Vulnerability The evolutionary perspective suggests that behavioral addictions are not randomly distributed across possible activities but cluster around behaviors with evolutionary significance. Gambling taps into risk-assessment and resource-acquisition mechanisms; internet gaming often involves status competition and coalition formation; problematic social media use engages social comparison and reputation management systems; and problematic sexual behavior activates mate-seeking and reproductive mechanisms (Hunt et al., 2024).\nThis domain-specificity helps explain patterns in behavioral addiction prevalence and comorbidity. Activities that engage multiple evolved psychological systems, such as internet gaming, which can simultaneously engage status, exploration, and social bonding mechanisms, may be particularly likely to produce addictive patterns. 
Similarly, individuals with heightened sensitivity in specific evolved domains may show selective vulnerability to behavioral addictions that engage those domains.\nCultural Evolution and Addiction Vulnerability The evolutionary perspective on behavioral addiction must account not only for genetic evolution but also for cultural evolution, that is, the transmission and modification of behaviors, technologies, and institutions across generations. The rapid pace of cultural evolution, particularly in digital technology, has outstripped genetic evolution, creating unprecedented mismatches between our evolved psychology and contemporary environments (O et al., 2024).\nThis mismatch is exemplified by the design of digital technologies, which increasingly incorporate features specifically engineered to maximize engagement through exploitation of evolved psychological mechanisms. Variable reward schedules, social validation features, and artificial scarcity mechanisms in digital platforms parallel similar features in gambling machines, suggesting convergent cultural evolution toward designs that maximize addictive potential.\nPrevention and Treatment: Evolutionary Applications The evolutionary perspective on addiction has significant implications for prevention and treatment approaches. By understanding addiction as an interaction between evolved predispositions and modern environments, interventions can be designed to better address the fundamental mechanisms underlying addictive behavior.\nEnvironmental Modification Perhaps the most direct implication of evolutionary mismatch theory is the importance of environmental modification in prevention efforts. If addiction vulnerability stems largely from the interaction between evolved psychology and novel environments, then altering those environments represents a powerful preventive approach. This might include:\nReducing unnecessary exposure to addictive substances and behaviors, particularly during sensitive developmental periods\nDesigning digital environments that satisfy evolved psychological needs without exploiting vulnerabilities (e.g., social media platforms that facilitate genuine connection rather than maximizing engagement)\nCreating physical and social environments that provide natural rewards aligned with evolved psychological mechanisms\nThese approaches recognize that individual-level interventions alone may be insufficient when environmental pressures toward addiction remain strong. Just as public health improvements in sanitation reduced infectious disease more effectively than individual treatment, environmental modifications may prove more effective than focusing solely on individual vulnerability factors.\nEvolutionarily Informed Clinical Approaches At the clinical level, evolutionary theory suggests several promising treatment directions. First, treatments that work with rather than against evolved psychological mechanisms may prove more effective than those that ignore or contradict these mechanisms. For instance, contingency management, which employs rewards like money, vouchers, or privileges to incentivize positive behavior, capitalizes on our innate sensitivity to immediate rewards to compete with the reward qualities of addictive substances or behaviors.\nSecond, interventions that address the specific evolved mechanisms engaged by different addictions may prove more effective than one-size-fits-all approaches. 
For gambling disorder, this might involve specifically addressing risk assessment and probability processing; for internet gaming disorder, interventions might target status needs and social belonging through alternative channels.\nThird, the evolutionary perspective highlights the importance of addressing underlying emotional and social needs that addictive behaviors may temporarily satisfy. If substance use or behavioral addictions represent maladaptive attempts to meet evolved needs for social connection, status, or stress regulation, then sustainable recovery requires developing adaptive alternatives that address these same fundamental needs.\nConclusion The evolutionary perspective offers a framework for understanding the behavioral and neural predispositions underlying addiction vulnerability. By conceptualizing addiction as resulting from the interaction between evolved psychological mechanisms and novel environmental conditions, this approach integrates findings across levels of analysis \u0026mdash; from molecular neuroscience to population-level patterns.\nThis framework helps explain why neurotypical individuals develop addictive patterns despite harmful consequences, why certain substances and behaviors are more likely to produce addiction than others, and why individuals differ in vulnerability. Moreover, it suggests new directions for prevention and treatment, focusing on reducing evolutionary mismatch rather than simply targeting symptoms.\nFuture research would benefit from more explicit integration of evolutionary theory into addiction science, including: testing specific hypotheses derived from evolutionary models of addiction vulnerability; investigating cross-cultural patterns in addiction to distinguish universal vulnerabilities from culturally specific manifestations; and developing and evaluating prevention and treatment approaches that leverage evolutionary insights.\nBy understanding addiction through this evolutionary lens, we gain not only theoretical clarity but also practical guidance for addressing one of society's most persistent health challenges.\nReferences Adinoff, B. (2004). Neurobiologic processes in drug reward and addiction. Harvard Review of Psychiatry, 12(6), 305\u0026ndash;320. https://doi.org/10.1080/10673220490910844\nAlcaro, A., Huber, R., \u0026amp; Panksepp, J. (2007). Behavioral functions of the mesolimbic dopaminergic system: An affective neuroethological perspective. Brain Research Reviews, 56(2), 283\u0026ndash;321. https://doi.org/10.1016/j.brainresrev.2007.07.014\nGoodwin, B. C., Browne, M., \u0026amp; Rockloff, M. (2015). Measuring preference for supernormal over natural rewards. Evolutionary Psychology, 13(4). https://doi.org/10.1177/1474704915613914\nHunt, A., Merola, G. P., Carpenter, T., \u0026amp; Jaeggi, A. V. (2024). Evolutionary perspectives on substance and behavioural addictions: Distinct and shared pathways to understanding, prediction and prevention. Neuroscience \u0026amp; Biobehavioral Reviews, 159, 105603. https://doi.org/10.1016/j.neubiorev.2024.105603\nKalivas, P. W., \u0026amp; O\u0026rsquo;Brien, C. (2007). Drug addiction as a pathology of staged neuroplasticity. Neuropsychopharmacology, 33(1), 166\u0026ndash;180. https://doi.org/10.1038/sj.npp.1301564\nMerrer, J. L., Becker, J. A. J., Befort, K., \u0026amp; Kieffer, B. L. (2009). Reward processing by the opioid system in the brain. Physiological Reviews, 89(4), 1379\u0026ndash;1412. https://doi.org/10.1152/physrev.00005.2009\nO, J., Aspden, T., Thomas, A. 
G., Chang, L., Ho, M. R., Li, N. P., \u0026amp; Van Vugt, M. (2024). Mind the gap: Development and validation of an evolutionary mismatched lifestyle scale and its impact on health and wellbeing. Heliyon, 10(15), e34997. https://doi.org/10.1016/j.heliyon.2024.e34997\nRoth-Deri, I., Green-Sadan, T., \u0026amp; Yadid, G. (2008). β-Endorphin and drug-induced reward and reinforcement. Progress in Neurobiology, 86(1), 1\u0026ndash;21. https://doi.org/10.1016/j.pneurobio.2008.06.003\nWard, S., Smith, N., \u0026amp; Bowden-Jones, H. (2018). The use of naltrexone in pathological and problem gambling: A UK case series. Journal of Behavioral Addictions, 7(3), 827\u0026ndash;833. https://doi.org/10.1556/2006.7.2018.89\nZorrilla, E. P., Logrip, M. L., \u0026amp; Koob, G. F. (2014). Corticotropin releasing factor: A key role in the neurobiology of addiction. Frontiers in Neuroendocrinology, 35(2), 234\u0026ndash;244. https://doi.org/10.1016/j.yfrne.2014.01.001\n","permalink":"https://zags.dev/papers/addiction-behavioral-and-neural-predispositions/","summary":"This paper examines addiction through an evolutionary lens, proposing that both substance and behavioral addictions reflect the interaction between evolved neural mechanisms and modern environmental contexts.","title":"Addiction: Behavioral and Neural Predispositions - An Evolutionary Perspective"},{"content":"Abstract Communication lies at the heart of human interaction, influencing perceptions, emotions, and decisions in profound ways. This research investigates how different communication styles affect decision-making processes by shaping behavior, both consciously and subconsciously. Drawing from scholarly literature and a personal research study, the paper examines the influence of verbal and nonverbal cues such as tone, body language, and word choice on decision-making across various contexts. The findings reveal the critical role of adaptable communication strategies in enhancing interpersonal effectiveness, conflict resolution, and leadership development.\nKeywords: communication styles, decision-making, behavior, nonverbal cues, interpersonal effectiveness\nVerbal and Nonverbal Communication: Pathways to Decision-Making Influence Interpersonal communication is fundamental to human existence, shaping the fabric of our personal relationships, professional interactions, and societal structures. The way we communicate through words, gestures, tone, and expressions can significantly influence how others perceive us and respond to our messages. This study focuses on understanding the effects of different communication styles on behavior, particularly in the context of decision-making. The central research question guiding this investigation is: How do different communication styles influence decision-making by affecting behavior? This question is not only academically significant but also practically relevant, as the ability to influence decisions through effective communication is a valuable skill in leadership, negotiation, and everyday interactions. By exploring how elements such as tone, nonverbal cues, and language choice impact decisions, this research aims to provide insights that enhance both personal and professional relationships.\nLiterature Review Communication styles profoundly influence human behavior and decision-making processes across various contexts. 
This review synthesizes relevant scholarship across five key thematic areas: nonverbal communication and impression management, emotional intelligence and persuasion, cultural dimensions of communication, digital communication dynamics, and social learning perspectives.\nNonverbal Communication and Impression Management Nonverbal cues play a crucial role in shaping perceptions and influencing decisions, often conveying more information than spoken words. Burgoon, Guerrero, and Floyd (2016) demonstrate how facial expressions, gestures, posture, and eye contact create impressions of confidence, sincerity, or authority that guide decision-making processes. These findings are complemented by Mehrabian\u0026rsquo;s (1971) research, which indicates that a significant portion of emotional meaning is transmitted through nonverbal channels.\nThe strategic management of these nonverbal elements constitutes what Goffman (1959) terms \u0026ldquo;impression management\u0026rdquo;, where communication becomes a performance through which individuals consciously present themselves to influence perceptions. This sociological perspective reveals how people modulate their tone, language, and demeanor to project competence and credibility in professional settings. Leary and Kowalski\u0026rsquo;s (1990) Impression Management Model further elaborates that people engage in such behavior not only to influence perceptions but also to achieve specific social outcomes like gaining approval or avoiding conflict.\nThe microelements of communication are explored in Van Edwards\u0026rsquo;s (2022) work on \u0026ldquo;cues\u0026rdquo; \u0026ndash; subtle nonverbal behaviors and vocal nuances that enhance charisma and influence. Her research suggests that these often unconscious signals significantly affect message reception and subsequent decision-making. Charismatic communicators skillfully deploy these micro-cues to build rapport, establish trust, and persuade effectively.\nEmotional Intelligence and Persuasion The emotional underpinnings of communication significantly impact its effectiveness in influencing decisions. Goleman\u0026rsquo;s (2006) research on social intelligence emphasizes how emotional awareness and empathy enhance communication by enabling individuals to interpret others\u0026rsquo; emotions and adjust their style accordingly. This ability to \u0026lsquo;read\u0026rsquo; others strengthens relationships and increases one\u0026rsquo;s capacity to influence decisions by addressing the emotional components of human behavior.\nPersuasive communication is systematically explored in Cialdini\u0026rsquo;s (2009) work, which identifies six key principles \u0026ndash; reciprocity, commitment, social proof, authority, liking, and scarcity \u0026ndash; that operate through specific communication strategies. For instance, authority can be reinforced through confident speech and assertive body language, while social proof relies on verbal cues highlighting consensus.\nThe cognitive processing of persuasive messages is addressed by Petty and Cacioppo\u0026rsquo;s (1986) Elaboration Likelihood Model, which distinguishes between central processing (careful consideration of content) and peripheral processing (reliance on superficial cues like speaker credibility). 
This framework helps explain how communication styles can activate different processing routes, affecting persuasion outcomes and decision-making.\nCultural Dimensions of Communication Communication styles vary significantly across cultures, with profound implications for decision-making in diverse settings. Gudykunst (2004) emphasizes how cultural backgrounds influence communication preferences and interpretations, where what appears assertive in one culture might seem aggressive in another. This cultural lens is essential when analyzing how communication styles affect decisions, particularly in multicultural environments where misinterpretations commonly occur.\nHall\u0026rsquo;s (1976) distinction between high-context and low-context cultures provides further insight into these variations. High-context cultures rely heavily on implicit communication, context, nonverbal cues, and shared experiences, while low-context cultures prioritize explicit, direct verbal communication. Understanding these differences is vital for effective cross-cultural communication and decision-making.\nDigital Communication Dynamics The evolution of communication in digital environments presents unique challenges and opportunities. Dhawan\u0026rsquo;s (2021) research on digital body language highlights how digital communication, such as emails, texts, and virtual meetings, depends on written cues, punctuation, and response timing to convey tone and intent. Without traditional nonverbal cues, digital communication requires greater clarity and intentionality to avoid misunderstandings and maintain influence.\nWalther\u0026rsquo;s (1996) Social Information Processing Theory complements this perspective by suggesting that individuals adapt their communication strategies in computer-mediated environments to compensate for the absence of nonverbal cues. Over time, these adaptations can foster meaningful relationships and effective decision-making, even in virtual settings.\nSocial Learning and Communication Development Bandura\u0026rsquo;s (1977) Social Learning Theory provides insight into how communication styles develop through observation and modeling. Individuals learn effective communication strategies by observing others, particularly those in influential positions, and then replicating these behaviors. This theory underscores the importance of role models and social environments in shaping communication styles that effectively influence decision-making.\nIntegration Collectively, these thematic areas provide a comprehensive framework for understanding how communication styles influence behavior and decision-making. By integrating insights from psychology, sociology, and communication studies, this literature review establishes a robust theoretical foundation for analyzing the complex interplay between communication approaches and their effects on interpersonal and professional outcomes.\nResearch Study Design Methodological Approach Drawing from the insights gained through the literature review, I designed a personal research study to observe how different communication styles influence decision-making in real-life scenarios. The study was conducted across various contexts, including professional meetings, peer collaborations, and informal social interactions, to capture a broad spectrum of responses.\nVariables and Communication Modifications The methodology involved systematically modifying specific elements of my communication style to assess their impact on others' decision-making. 
These modifications included variations in:\nTone (assertive versus empathetic)\nPacing of speech (fast versus slow)\nLanguage complexity (formal versus informal)\nNonverbal cues (eye contact, gestures, posture)\nFor example, in professional settings, I alternated between using a confident, authoritative tone and a more collaborative, empathetic approach to observe differences in how colleagues responded to suggestions and feedback.\nData Collection Procedures Data collection was conducted through detailed journaling, where I documented each interaction, the communication style employed, the context, and the observable reactions of the participants. This qualitative approach allowed for in-depth reflection on how different variables influenced outcomes. Additionally, I made note of any emotional responses, such as signs of agreement, hesitation, or resistance, to better understand the underlying behavioral drivers.\nAnalytical Framework The analysis involved identifying patterns and correlations between specific communication styles and decision-making outcomes. By comparing the effectiveness of different approaches across similar scenarios, I was able to draw conclusions about which communication strategies were most influential in guiding behavior, interpersonal effectiveness, conflict resolution, and leadership development.\nResearch Study Results Impact of Communication Assertiveness One of the most consistent findings was the effectiveness of assertive communication in professional environments. When presenting ideas with a confident, authoritative tone, accompanied by strong nonverbal cues such as upright posture and steady eye contact, I observed that colleagues were more likely to accept proposals without extensive questioning. This aligns with Cialdini\u0026rsquo;s principle of authority, suggesting that perceived confidence can enhance credibility and influence decisions.\nHowever, in more informal or emotionally charged situations, an assertive style sometimes led to defensiveness or resistance. In contrast, adopting an empathetic tone, characterized by active listening, softer vocal inflections, and open body language, fostered greater openness and collaboration. Participants were more willing to share their thoughts and consider alternative perspectives, highlighting the role of emotional resonance in decision-making.\nThe Role of Nonverbal Communication Nonverbal cues emerged as powerful influencers regardless of the context. Positive body language, such as nodding, maintaining appropriate eye contact, and using gestures to emphasize points, consistently enhanced the clarity and persuasiveness of my messages. Conversely, closed body language, like crossed arms or limited facial expressions, often created barriers to effective communication, even when the verbal content was strong.\nDigital Communication Effectiveness In digital communication scenarios, the absence of traditional nonverbal cues required greater attention to written language. Consistent with Dhawan\u0026rsquo;s findings, I discovered that concise, well-structured messages with thoughtful punctuation improved clarity and elicited more prompt and positive responses. Conversely, vague or overly complex messages tended to result in delays or misunderstandings, underscoring the need for digital communicators to be intentional in their word choice and formatting.\nCultural Considerations in Communication The study also highlighted the importance of cultural sensitivity. 
Interactions with individuals from diverse cultural backgrounds revealed that communication styles need to be adjusted to accommodate different norms and expectations. For instance, while directness was appreciated in some contexts, it was perceived as blunt or disrespectful in others. Adapting my approach to align with cultural preferences improved communication effectiveness and decision-making outcomes.\nDiscussion Professional Communication and Leadership The findings of this study reinforce the critical role of communication styles in shaping decision-making processes, especially when viewed through the lens of real-world communication challenges. In professional settings, for example, the ability to assert oneself clearly and confidently often determines success in leadership roles, negotiations, and team management. However, over-reliance on an assertive style without consideration of the emotional state of the audience can lead to resistance or conflict. This dynamic is frequently observed in workplace environments where hierarchical power structures exist. Leaders who fail to balance authority with empathy may struggle to inspire genuine collaboration, highlighting the importance of adaptable communication strategies.\nConflict Resolution and Interpersonal Effectiveness In interpersonal relationships, especially during conflict resolution, the study\u0026rsquo;s insights into empathetic communication are particularly relevant. Real-world conflicts often escalate not because of the issues themselves but due to poor communication. Active listening, validation of emotions, and non-confrontational language can de-escalate tensions and foster mutual understanding. This aligns with Goleman\u0026rsquo;s emphasis on emotional intelligence as a key factor in effective interpersonal interactions.\nDigital Communication Challenges The challenges of digital communication present another layer of complexity. In remote work environments, where emails and virtual meetings dominate, the absence of traditional nonverbal cues can lead to misunderstandings and reduced rapport. My findings underscore the necessity of clear, concise language and the strategic use of digital \u0026ldquo;cues\u0026rdquo; such as timely responses, thoughtful punctuation, and even emojis when appropriate to convey tone. This is particularly critical in global business contexts, where cultural differences further complicate digital interactions.\nCross-Cultural Communication Competence Cultural diversity itself poses a significant communication challenge in today\u0026rsquo;s interconnected world. Misinterpretations arising from differing communication norms can hinder collaboration in multicultural teams. For instance, while directness is valued in some cultures, it may be perceived as rude in others. The ability to recognize and adapt to these differences, what Gudykunst refers to as intercultural competence, is essential for effective global communication.\nImplications for Communication Practice Ultimately, the study highlights that effective communication is not just about transmitting information but about creating connections. 
Whether in leadership, personal relationships, or digital platforms, the principles of adaptability, emotional intelligence, and cultural sensitivity are key to influencing decisions and fostering meaningful interactions in the real world.\nLimitations and Future Research Directions While this study provides valuable insights into communication styles and decision-making, several limitations should be acknowledged. The qualitative nature of the observations and the personal involvement of the researcher introduce potential bias. Future research could benefit from more structured methodologies, larger sample sizes, and the incorporation of quantitative measures to validate these findings. Additionally, longitudinal studies examining how communication styles evolve and impact decision-making over time would provide deeper understanding of these dynamics.\nTheoretical Implications Communication theories continue to evolve as our understanding of human interaction deepens. This study contributes to theoretical development by demonstrating the interconnectedness of various communication elements \u0026ndash; verbal, nonverbal, emotional, and cultural \u0026ndash; in influencing decision-making processes. The findings suggest that existing theories might benefit from more integrated approaches that consider these elements holistically rather than in isolation.\nPractical Applications The insights from this research have several practical applications across different domains:\nProfessional Development Organizations can enhance leadership effectiveness by providing training in adaptive communication styles. Programs that develop awareness of how tone, body language, and cultural sensitivity affect decision-making could improve managerial performance and team cohesion.\nEducational Settings Educators can incorporate these findings into communication curricula, helping students develop versatile communication skills that will serve them in diverse professional and personal contexts. Teaching students to recognize and adapt to different communication needs could enhance their future effectiveness.\nConflict Mediation Mediators and counselors might apply these insights to develop more effective intervention strategies. Understanding how communication styles can either escalate or de-escalate tensions provides valuable tools for conflict resolution professionals.\nDigital Communication Design Designers of digital communication platforms could use these findings to develop features that enhance clarity and reduce misunderstandings in virtual environments. This might include better integration of visual cues or tools that help users adapt their communication style to different cultural contexts.\nConclusion This research demonstrates the profound impact that communication styles have on decision-making processes across various contexts. By understanding how verbal and nonverbal elements influence perceptions and behaviors, individuals can develop more effective strategies for navigating professional environments, interpersonal relationships, and cross-cultural interactions. The ability to adapt one\u0026rsquo;s communication style, such as balancing assertiveness with empathy or pairing clarity with cultural sensitivity, emerges as a crucial skill in our increasingly complex and interconnected world.\nAs communication continues to evolve, particularly in digital environments, the need for intentional, adaptable communication strategies becomes even more essential. 
By cultivating awareness of how our communication affects others\u0026rsquo; decision-making, we can foster more productive, harmonious, and effective interactions in all aspects of life.\nReferences Burgoon, J. K., Guerrero, L. K., \u0026amp; Floyd, K. (2016). Nonverbal communication (2nd ed.). Routledge.\nCialdini, R. B. (2009). Influence: The psychology of persuasion. Harper Business.\nDhawan, E. (2021). Digital body language: How to build trust and connection, no matter the distance. St. Martin\u0026rsquo;s Press.\nGoffman, E. (1959). The presentation of self in everyday life. Anchor Books.\nGoleman, D. (2006). Social intelligence: The new science of human relationships. Bantam Books.\nGudykunst, W. B. (2004). Bridging differences: Effective intergroup communication (4th ed.). Sage Publications.\nKnapp, M. L., \u0026amp; Hall, J. A. (2010). Nonverbal communication in human interaction (7th ed.). Wadsworth.\nVan Edwards, V. (2022). Cues: Master the secret language of charismatic communication. Portfolio.\n","permalink":"https://zags.dev/papers/verbal-nonverbal-decision-influence/","summary":"This research investigates how different communication styles affect decision-making processes by shaping behavior, both consciously and subconsciously. It examines the influence of verbal and nonverbal cues such as tone, body language, and word choice on decision-making across various contexts, revealing the critical role of adaptable communication strategies in enhancing interpersonal effectiveness, conflict resolution, and leadership development.","title":"Verbal and Nonverbal Communication: Pathways to Decision-Making Influence"},{"content":"While setting up some infrastructure as code (IaC) for a recent project, I found myself questioning a practice that I\u0026rsquo;ve been following. As I prepared my Terraform (OpenTofu) and Ansible configurations to provision a virtual private server (VPS), a seemingly simple question emerged: should the approach I use for my bare metal servers be the same for cloud infrastructure?\nIntroduction: The Ansible Access Dilemma For a while, my standard approach has been creating a dedicated Ansible user via cloud-init with sudo permissions. This method follows industry best practices and is well-documented across countless DevOps resources.\nHowever, when provisioning single-purpose VPS instances that might remain relatively static after initial setup, I began wondering: is this additional complexity always justified? Would it be simpler and potentially even more secure to just run Ansible as root?\nAfter all, using root would simplify the VPS configuration pipeline \u0026ndash; no need for cloud-init user provisioning, no waiting for user creation before running Ansible, and direct SSH key management through Terraform. But would this convenience come at the cost of security, or could it actually reduce risk by eliminating an additional accessible account?\nThis led me to consider the \u0026ldquo;provision then lock the door behind you\u0026rdquo; approach in my infrastructure management. Let\u0026rsquo;s explore both approaches to understand their implications for different scenarios.\nRunning Ansible as Root: Simplicity vs. Convention The root approach offers immediate appeal for streamlined deployments. 
With direct root access, we can eliminate several steps from our provisioning workflow:\nNo cloud-init configuration for user creation\nNo potential race conditions where Ansible runs before user setup completes\nSimpler SSH key management via Terraform\nFewer moving parts in the overall automation pipeline\nAdditionally, when working with ephemeral infrastructure like single-purpose, short-lived VPS instances, having one less account with SSH access could theoretically reduce the attack surface.\nHowever, this approach comes with some criticisms that warrant investigation.\nPotential Drawbacks of the Root Approach Security Vulnerabilities:\nViolates the principle of least privilege \u0026ndash; a cornerstone of security architecture\nCreates a single point of access with maximum system privileges\nIncreases potential impact if credentials are compromised\nMakes it difficult to attribute actions to specific users in system logs\nOperational Risks:\nRemoves safeguards against destructive operations\nIncreases the potential consequences of playbook errors or typos\nEliminates permission-based verification before critical system changes\nMakes it easier to accidentally impact system stability\nPoor Compliance Alignment:\nConflicts with requirements in most security frameworks and compliance standards\nCreates challenges for audit trails in regulated environments\nRepresents a deviation from established industry best practices\nA Closer Look at These Concerns I would argue that these concerns are overstated. After all, a typical Ansible user is generally configured with passwordless sudo access to everything, essentially making it equivalent to root from a permission perspective.\nSimilarly, from an audit perspective, all actions would be logged under a single user (either root or the Ansible user), so the traceability argument might seem weak at first glance.\nWhen following the \u0026ldquo;provision then lock\u0026rdquo; approach \u0026ndash; where root SSH access is disabled after initial configuration \u0026ndash; the security differences might appear even less significant.\nSo, is the dedicated Ansible user just unnecessary complexity? Not exactly.\nThe Case for a Dedicated Ansible User While a dedicated Ansible user might seem like added complexity for minimal security gain, there are several compelling reasons to maintain this separation of concerns, particularly in specific contexts.\nBenefits Beyond Simple Permission Differences Even when the Ansible user has sudo privileges, maintaining this separation provides important benefits:\nAlignment with Security Principles: Separating accounts by function reflects fundamental security design, establishing patterns that scale better as infrastructure grows.\nOperational Clarity: Having distinct users for distinct functions improves system organization and makes intentions clearer.\nGroundwork for Future Refinement: Starting with separated users makes it easier to implement more granular permissions later without architectural changes.\nConsistent Practices: Following standard practices makes your infrastructure more approachable for team members and ensures compatibility with common security tools and auditing processes.\nLong-term Infrastructure Considerations For servers that will have extended lifecycles and require ongoing management, the dedicated Ansible user becomes even more valuable. In these cases, maintaining persistent root access would be a significant security liability, making the separate user approach essential.
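To make the dedicated-user pattern concrete, the cloud-init side usually amounts to a few lines of user data. Below is a minimal sketch, not a definitive implementation \u0026ndash; the user name, group, and public key are placeholders, and the sudo policy mirrors the broad-but-conventional passwordless setup discussed above:\n#cloud-config\nusers:\n  - name: ansible # dedicated automation account (placeholder name)\n    shell: /bin/bash\n    groups: sudo # use wheel on RHEL-family images\n    sudo: 'ALL=(ALL) NOPASSWD:ALL' # the typical passwordless-sudo policy noted above\n    ssh_authorized_keys:\n      - ssh-ed25519 AAAA... ansible@controller # placeholder public key\nWith something like this in place, Terraform only has to hand the user data to the instance, and Ansible can connect as the dedicated user once cloud-init finishes.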
Finding a Pragmatic Middle Ground After careful consideration, I\u0026rsquo;ve realized this isn\u0026rsquo;t necessarily an either/or decision. Context matters significantly, and different infrastructure may warrant different approaches.\nFor Ephemeral Infrastructure Truly ephemeral VPS instances \u0026ndash; those serving single purposes with short lifecycles that are regularly rebuilt \u0026ndash; may benefit from the simplicity of the root approach. In these cases, you can:\nPerform initial provisioning as root\nConfigure and harden all services\nDisable root SSH access as the final step\nImplement application-specific service accounts for any ongoing processes\nThis \u0026ldquo;provision and lock\u0026rdquo; method can work well for infrastructure that changes primarily through complete rebuilds rather than ongoing maintenance.\nFor Persistent Infrastructure For long-lived servers that require regular updates, maintenance, and evolution, the dedicated Ansible user remains the superior approach. These environments benefit from:\nScalable Permission Models: Start with broad permissions, then refine as operational patterns emerge\nTeam-Friendly Architecture: Support multiple administrators with clear access patterns\nReduced Operational Risk: Maintain safeguards during iterative changes\nBetter Auditability: Separate automated from manual administrative actions\nThe \u0026ldquo;Lock Behind You\u0026rdquo; Security Pattern Regardless of which approach you choose, implementing the \u0026ldquo;provision then lock\u0026rdquo; security pattern is essential. This approach involves:\nUsing privileged access for initial setup and configuration\nImplementing comprehensive security measures and service configurations\nHardening SSH access through key-only authentication, non-standard ports, and IP restrictions\nRemoving or severely restricting the initial privileged access paths\nEstablishing narrow, purpose-specific access for future maintenance\nWhile this pattern works with either approach, it tends to integrate more naturally with the dedicated user model, particularly for infrastructure that requires ongoing management (a task-level sketch of the lock-down step follows the recommendations below).\nContext-Sensitive Recommendations After analyzing both approaches in various scenarios, here are my practical recommendations:\nFor Ephemeral, Single-Purpose VPS Deployments\nThe root approach can be justified when simplicity and rapid deployment are priorities\nEnsure proper \u0026ldquo;lock behind you\u0026rdquo; procedures are automated as part of the provisioning\nDocument your reasoning for deviating from standard practice\nConsider the lifecycle \u0026ndash; if there\u0026rsquo;s any chance the \u0026ldquo;temporary\u0026rdquo; server might become permanent, use the dedicated user approach\nFor Persistent, Long-Lived Infrastructure\nImplement the dedicated Ansible user approach\nDevelop standardized cloud-init templates to streamline the process\nConsider implementing more granular sudo permissions based on specific operational needs\nEnsure proper key management and rotation for the Ansible user\nFor Mixed Environments\nMaintain consistent practices across similar systems where possible\nDocument exceptions and their justifications clearly\nImplement comprehensive access logging regardless of approach\nRegularly review access patterns and adjust as needed
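As a sketch of the \u0026ldquo;lock the door behind you\u0026rdquo; step itself, the final play of a provisioning run might look like the following. This is a minimal illustration rather than a complete hardening playbook \u0026ndash; it only disables root login and password authentication, assumes stock OpenSSH paths, and leaves port changes and IP restrictions to be layered on top:\n- name: Lock the door behind you\n  hosts: all\n  become: true\n  tasks:\n    - name: Forbid root login over SSH\n      ansible.builtin.lineinfile:\n        path: /etc/ssh/sshd_config\n        regexp: '^#?PermitRootLogin'\n        line: PermitRootLogin no\n      notify: Restart sshd\n    - name: Enforce key-only authentication\n      ansible.builtin.lineinfile:\n        path: /etc/ssh/sshd_config\n        regexp: '^#?PasswordAuthentication'\n        line: PasswordAuthentication no\n      notify: Restart sshd\n  handlers:\n    - name: Restart sshd\n      ansible.builtin.service:\n        name: sshd # the unit is named ssh on Debian-family hosts\n        state: restarted\nRun as the last play against a freshly provisioned host, this keeps key-based access for the remaining account while closing the root path used during setup.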
Remember that even when an Ansible user has full sudo rights, maintaining this separation reinforces good security hygiene and keeps your infrastructure aligned with standard practices \u0026ndash; making it more maintainable as your team or infrastructure grows.\nConclusion: Balancing Security and Practicality The choice between root and a dedicated Ansible user isn\u0026rsquo;t merely about technical security \u0026ndash; it\u0026rsquo;s about balancing operational efficiency with security best practices appropriate to your specific context.\nI\u0026rsquo;m still inclined to recommend the dedicated Ansible user approach for most scenarios, as the additional setup complexity is minimal when properly templated, and the alignment with security principles provides peace of mind. However, I recognize that for truly ephemeral infrastructure with automated rebuilds, the root approach can offer meaningful simplification with careful implementation.\nWhat matters most isn\u0026rsquo;t rigidly adhering to either approach, but making deliberate, informed choices based on your specific requirements \u0026ndash; and documenting those choices for future reference.\nFinal Recommendations Document Your Decision Process: Whichever approach you select, document your reasoning and ensure your team understands the security implications and rationale.\nStandardize Where Possible: Develop consistent practices across similar infrastructure to avoid confusion and operational mistakes.\nImplement Comprehensive Monitoring: Regardless of approach, ensure you have robust logging and monitoring to track all system access and changes.\nRegular Security Reviews: Periodically reassess your approach as your infrastructure evolves and security requirements change.\nRemember that infrastructure security is built in layers \u0026ndash; how Ansible connects to your servers is just one component of a comprehensive security posture. The most important factor is making deliberate, informed choices rather than defaulting to convenience without consideration.\nWhere to Go From Here If you\u0026rsquo;re looking to enhance your Ansible security posture further, consider these next steps:\nImplement role-based access control by creating more granular sudo permissions for your Ansible user based on specific task requirements. This addresses the audit and traceability challenges while maintaining operational efficiency.\nConsider implementing bastion hosts as secure entry points to your infrastructure, adding another layer of security regardless of whether you\u0026rsquo;re using root or a dedicated user for Ansible operations.\n","permalink":"https://zags.dev/posts/ansible-as-root-or-user/","summary":"This post explores the pros and cons of running Ansible as root or with a privileged user account, and provides practical recommendations for different scenarios.","title":"Root or User? Design Considerations for Ansible in IaC"},{"content":"Well folks, buckle up for a story that wasn\u0026rsquo;t supposed to be a story at all. What started as a simple \u0026ldquo;let me document my homelab migration\u0026rdquo; turned into a fascinating journey through the wonderland of Ansible dependency management. And by fascinating, I mean \u0026ldquo;why is this so unnecessarily complicated?\u0026rdquo;\nA Bit of History (Because Context is Everything) Let\u0026rsquo;s travel back to the simpler times, pre-2020, before Ansible decided we all needed a more \u0026ldquo;sophisticated\u0026rdquo; approach to role management. Back then, life was straightforward: one repo per role, upload to Ansible Galaxy, done. Need to use a role? Just add namespace.role to your requirements.yaml, run ansible-galaxy role install -r requirements.yaml, and you\u0026rsquo;re off to the races.\nThese requirements.yaml files were actually quite nice - you could name your roles (shocking, I know), pull from sources other than Ansible Galaxy, and generally live a peaceful existence. This process worked perfectly fine when pulling in a few remote roles for a playbook.
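For a taste of what that simplicity looked like, here is a minimal requirements.yaml sketch in the old style \u0026ndash; one role resolved from Galaxy by namespace, one given a custom name and pulled from a git source (the dhcp repo URL is illustrative):\n- name: syaghoubi00.epel # resolved from Ansible Galaxy\n- name: syaghoubi00.dhcp # custom name for a role pulled from git\n  src: https://github.com/syaghoubi00/ansible-role-dhcp\n  scm: git\n  version: main\nOne ansible-galaxy role install -r requirements.yaml later, both roles land under the names you chose.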
But oh, you want to do anything beyond that? Well\u0026hellip; grabs popcorn\nThe Problems I Ran Into (Or: How I Learned to Stop Worrying and Love the Pain) Picture this: there I was, innocently setting up an Ansible role to install a DHCP server for my new homelab. It turned out my DHCP role needed EPEL (Extra Packages for Enterprise Linux) to install the DHCP package. Being the good developer I am (pat on the back), I thought, \u0026ldquo;Let\u0026rsquo;s keep things clean and follow single responsibility principles!\u0026rdquo; My ansible-role-dhcp role needed EPEL. No problem, right? I already had a role for that!\nOh, isn\u0026rsquo;t that adorable. Past me was so naive.\nAnd this is where things got interesting.\nAnsible Role Dependency Handling: A Comedy of Errors So, Ansible roles have this thing called meta/main.yaml for handling dependencies. Great! Problem solved, right?\nHa. Ha\u0026hellip; If only.\nI thought it would be as simple as:\ndependencies:\n  - name: syaghoubi00.epel\n    src: https://github.com/syaghoubi00/ansible-role-epel\nSpoiler alert: it wasn\u0026rsquo;t.\nProblem #1: You Can\u0026rsquo;t Name Roles Within a Dependency Because obviously, why would you want to do something so logical? I quickly discovered you can\u0026rsquo;t name roles within a dependency declaration. Instead, Ansible Galaxy decides to install the dependency as ansible-role-epel, dropping any namespaces and stubbornly naming the role as the name of the repo from the src. This broke the expected inclusion pattern of syaghoubi00.epel.\nBeautiful.\nProblem #2: Ansible Galaxy is Currently Broken \u0026ldquo;Well,\u0026rdquo; I thought, \u0026ldquo;I\u0026rsquo;ll just upload it to Ansible Galaxy!\u0026rdquo;\nBut it wasn\u0026rsquo;t that simple.\nTurns out, since the migration to Ansible Galaxy NG (because we definitely needed that), where collections became the new golden child, individual roles have been demoted to \u0026ldquo;legacy\u0026rdquo; status. And boy, does that \u0026ldquo;legacy\u0026rdquo; status shine through in the bug count. Want to upload roles from a different branch? Good luck with that!\nProblem #3: The Pre-requisite Dance \u0026ldquo;Fine\u0026rdquo; I muttered, \u0026ldquo;I\u0026rsquo;ll drop the namespaces and just use a dependency.\u0026rdquo;\nBut wait! There\u0026rsquo;s more! The meta/main.yaml dependency logic decides to run dependencies before all other tasks. Because apparently, flexibility is overrated. Need to include that role in the middle of your tasks? Too bad!\nThis rigid ordering doesn\u0026rsquo;t work when you need to include the dependent role\u0026rsquo;s tasks at a specific point within your role\u0026rsquo;s execution (a task-level workaround is sketched after this list of problems).\nProblem #4: The Requirements That Aren\u0026rsquo;t Required Deep in the documentation (where hope goes to die), there are mentions of a meta/requirements.yaml. Perfect, right? All the abilities of a normal requirements.yaml, with the bonus of being able to name roles!\nExcept\u0026hellip; it doesn\u0026rsquo;t resolve dependencies automatically. Because of course it doesn\u0026rsquo;t.
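For reference, the mid-task inclusion that Problem #3 rules out for meta dependencies is exactly what ansible.builtin.include_role does at the task level. A sketch, assuming the EPEL role is already installed under the expected name (which, as described above, was the hard part) and using an illustrative package name:\n- name: Prepare things that must happen before the dependency\n  ansible.builtin.debug:\n    msg: runs before EPEL, which a meta dependency cannot do\n- name: Pull in the EPEL role exactly here\n  ansible.builtin.include_role:\n    name: syaghoubi00.epel\n- name: Install the DHCP server once EPEL is available\n  ansible.builtin.package:\n    name: dhcp-server # illustrative package name\n    state: present\nThe catch, of course, is that include_role only includes \u0026ndash; it does nothing to fetch the role, which is the dependency-resolution gap the rest of this post wrestles with.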
After burning through four different approaches and watching each one fail spectacularly, I had to face the truth: maybe, just maybe, there was a reason Ansible was pushing everyone toward collections. Not that I was happy about it, but at least I understood the why of it all.

Submit to the Ansible Collection Paradigm (Or: Resistance is Futile) At this point, you might be thinking, "Geez, that seemed like a lot of work just to avoid using a collection."

Yes. Yes, it was. But let me tell you why I was trying to avoid collections in the first place.

Collections: The Giant Monorepo Nobody Asked For Collections are essentially giant monorepos. Sure, I tried to work around this with git submodules. I even tried using a requirements file and installing the roles with git sources to the repo using a custom path. But both approaches just led to more mess and complexity.

Want to manage a changelog and do semver bumping by hand? Hope you enjoy pain! With a monorepo, this could be automated using commit history, but good luck doing that with the submodules or requirements approaches.

Why I've Been Avoiding the Collections Monorepo Here's a fun question: at what point does a monorepo become too much to manage? 20+ roles? 50? And what if I only need a single role - now I have to pull in the entire collection? That seems… efficient.

Collections aren't without their drawbacks:

- Monorepo Management: As the number of roles grows, repository management becomes more complex.
- All-or-Nothing Updates: Changes to a single role affect the entire collection's versioning.
- Loss of Individual Role Versioning: You can't tag releases for individual roles anymore.

The Silver Lining (Because We Need Something Positive) After all this complaining (some of it justified, I might add), I have to admit there are a few advantages to collections (but don't tell Ansible I said this).

The Good Parts

- One command to rule them all: ansible-galaxy collection install syaghoubi00.homelab
- Efficient distribution thanks to tarball compression (I would test the size differences, but since I'm stuck with collections anyway, why bother?)
- Automated version management

Making the Best of It If you're going to be forced into the collections world, here's how to make it less painful:

- Group related roles into collections based on their purpose (e.g., homelab, security, monitoring) - there's a sketch of what that looks like after this list
- Start any new projects with collections rather than individual roles - trust me, future you will be grateful
- Embrace the monorepo workflow (if you can't beat 'em, join 'em)
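If you're trying to picture what one of these collections looks like on disk: it's a single repo with a galaxy.yml at the root and every role gathered under roles/. A rough sketch of the metadata file, with values matching my hypothetical homelab collection:

```yaml
# galaxy.yml -- collection metadata at the repo root;
# the roles themselves live under roles/<role-name>/
namespace: syaghoubi00
name: homelab
version: 1.0.0
readme: README.md
authors:
  - syaghoubi00
```

One repo, one version number - which is exactly the all-or-nothing property I was complaining about above.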
Conclusion So here we are, at the end of this unexpected journey through Ansible dependency management. What started as a simple homelab role turned into a deep dive into the evolution of Ansible's tooling - from the simplicity of individual roles to the "sophisticated" world of collections.

Am I happy about migrating all my individual role repos to a giant collections repo? Not particularly. Did I have much choice? Also not particularly. But that's the thing about working with tools in the ever-evolving landscape of DevOps - sometimes you have to adapt to the platform's vision, even when that vision feels like it's complicating your life.

For those still clinging to individual role repositories (I see you, and I understand), start planning your migration to collections now. The writing is on the wall - the "legacy" status of individual role management in Ansible Galaxy isn't just a label, it's a glimpse into your future headaches if you don't adapt.

The moral of this story? Maybe it's that everything eventually becomes a monorepo. Or perhaps it's that dependency management is just universally painful across all technologies. But more likely, it's that sometimes the path of least resistance is to accept that fighting against the tool's preferred patterns is more painful than adapting to them - even if those patterns seem unnecessarily complicated.

Choose your own moral. I'll be here, migrating my roles to collections and pretending this is fine.

This is fine.

…sips coffee while staring at terminal…
","permalink":"https://zags.dev/posts/ansible-dependency-problems/","summary":"While working on some Ansible roles for my Day-1 Homelab Ops, I discovered that managing dependencies in Ansible isn't as straightforward as one might expect.","title":"The Unexpected Mess of Ansible Dependency Management"},{"content":" Why is this commit message just 'fix stuff'?

If you've ever muttered these words while reviewing code, you're not alone. While developers use Git commits daily, many overlook their importance as a communication tool.

I recently explored ways to make them more meaningful - and discovered some surprising performance implications worth sharing.

Why Care About Commit Formatting? The Problem with Unstructured Commits Consider these two commit histories:

```
feat: add user authentication system
fix: resolve password reset token expiration
docs: update API authentication examples
```

versus:

```
updated stuff
fixed the thing
more changes
```

The first history tells a clear story: new authentication features were added, a bug in password resets was fixed, and documentation was updated. The second leaves us guessing - which "stuff" was updated? What "thing" was fixed?

Clear Communication Well-structured commits serve as documentation, telling the story of your project's evolution. They help team members (including future you) understand:

- What changed and why
- The scope and impact of changes
- Whether updates might introduce breaking changes
- How features evolved over time

Practical Benefits I first realized the importance of structured commits while using Neovim with lazy.nvim for plugin management. During updates, I found myself reviewing changelogs regularly.
Projects following conventional commit standards made this process significantly more efficient - changes were clearly categorized, making it easy to:

- Identify new features
- Spot potential breaking changes
- Understand bug fixes
- Assess update impact

No more getting lost in a sea of random commit messages.

Understanding Conventional Commits The Conventional Commits specification provides a standardized format for commit messages.

The basic structure looks like this: <type>(scope): message

For example: feat(blog): add comment system

Common types include:

- feat: New features. Example: feat(ui): add dark mode support
- fix: Bug fixes. Example: fix(api): handle null response from user service
- docs: Documentation changes. Example: docs(readme): clarify installation steps
- chore: Maintenance tasks. Example: chore(deps): update left-pad to 1.30

The scope, while optional, is super helpful for categorizing changes in bigger projects. For example, a web application might use scopes like api, auth, ui, or db.

This structured format makes it easier to:

- Automatically generate changelogs
- Determine semantic version bumps
- Parse and understand commit history
- Maintain consistency across teams

Semantic Versioning Made Easy One of the most compelling reasons to use conventional commits is how they simplify semantic versioning.

Semantic versioning (MAJOR.MINOR.PATCH) follows these rules:

- MAJOR version increments for breaking changes
- MINOR version increments for new features
- PATCH version increments for bug fixes

With conventional commits, determining version bumps becomes programmatic:

- feat!: or fix!: commits trigger MAJOR version bumps
- feat: commits trigger MINOR version bumps
- fix: commits trigger PATCH version bumps

Example:

```
fix(cache): resolve memory leak
-> 1.0.1 (PATCH bump)

feat(search): add fuzzy matching
-> 1.1.0 (MINOR bump)

feat(auth)!: switch to OAuth2 only
-> 2.0.0 (MAJOR bump - breaking change)
```

Tools like release-please from Google can automatically:

- Parse your conventional commits
- Generate appropriate version numbers
- Create changelogs
- Generate release notes
- Create release pull requests

This automation eliminates manual version management and reduces human error in the release process.

Automating Commit Standards Rather than relying on self-regulation to follow the Conventional Commits spec (and trying to remember all the conventions, because who needs that extra mental load?), we can leverage Git hooks to enforce these standards automatically. Git hooks are scripts that run at specific points in Git's execution cycle.
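To make "hook" concrete before bringing in tooling: a commit-msg hook is just an executable script that Git runs with the path of the proposed message file as its first argument, and a non-zero exit aborts the commit. A toy sketch of one (not one of the tools discussed below, just the bare mechanism):

```sh
#!/bin/sh
# .git/hooks/commit-msg -- Git passes the commit message file path as $1
# Reject the commit unless the subject line starts with a conventional type
head -n 1 "$1" | grep -qE '^(feat|fix|docs|chore)(\([^)]+\))?!?: ' || {
  echo "commit message does not follow Conventional Commits" >&2
  exit 1
}
```

In practice you'd want a fuller type list and friendlier errors, which is exactly what the tools below provide.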
Atlassian has a great explanation if you want to dive deeper into how hooks work.

Tools Overview There are two key components used to automate commit message standards with git hooks:

- A git hook manager
- A commit message linter

Git Hook Managers While Git hooks are powerful, managing them directly can be cumbersome:

- Hooks aren't versioned by default
- Hook scripts need manual installation and updates
- Different projects might need different hook configurations
- Hook dependencies need manual management

Hook managers solve these problems by:

- Versioning hooks alongside your code
- Providing declarative configuration
- Automatically managing hook dependencies
- Enabling easy sharing of hook configurations
- Supporting multiple programming languages and tools
- Offering a plugin ecosystem for common tasks

There are two popular options for managing hooks:

pre-commit - A Python-based framework. A simple pip install pre-commit is all you need.

Husky - A JavaScript-based framework. Installs with npm install --save-dev husky.

Both options install easily, as pip and npm are found on most systems; and it is generally pretty painless to add them if they aren't.

There may be some edge cases for each option, but for general use cases they seem reasonably equivalent. It's worth noting that I did find husky a bit easier to configure post-install.

Commit Linters For linting commit messages, I explored two options:

commitlint - The popular choice, especially in node land. It's mature and well-documented.

Pros:

- Offers comprehensive rule configuration
- Has extensive documentation and examples
- Has prettier output formatting, though this is largely cosmetic
- Provides plugins for various workflows

Cons:

- Requires node ecosystem
- Depends on husky for hook installation, with no pre-commit support

conform - Created by Sidero Labs, the company behind Talos (of Kubernetes fame), conform is a Go-based tool that offers more features than just commit linting.

Pros:

- Installs as a single binary
- Fewer dependencies
- Many useful built-in policies: Conventional Commits, GPG signatures, license headers, spell checks

Cons:

- Still in alpha release phase
- The config documentation is lacking

Performance Benchmarks I was curious how conform would perform against the commitlint setup, since it is written in Go and lacks the dependence on a hook manager.

Performance might not seem that important at first, until you consider that this is going to run every time you try to make a commit.

Git hooks often get a bad reputation because people load up on hooks and every commit ends up taking a long time to complete. When making commits frequently, this gets extremely annoying very quickly.

Test Setup While far from a comprehensive test, I set up a quick test for each tool:

1. Create a temporary directory
2. Install the tools
3. Add a basic config
4. Run hyperfine (a benchmarking tool) with pass/fail commands
5. Copy the results.md to this post.
Since the temporary directory is inside /tmp/, which is mounted as a tmpfs, there shouldn't be a concern about storage bottlenecking.

Some more machine details for the curious:

- OS: Fedora 41
- CPU: AMD EPYC 7302 (16 cores - 3GHz)
- RAM: 128GB DDR4-2666
- git version 2.47.1
- conform version v0.1.0-alpha.30 (43d9fb6d)
- husky@9.1.7
- @commitlint/cli@19.6.1

Test Scripts:

🗒️ Note

The proper install method is covered below in Tool Install if you are curious about install instructions.

Husky + commitlint

```sh
## Set up test repo
tmp_dir=$(mktemp -d /tmp/husky_commitlint.XXX)
cd "$tmp_dir" && git init

## Install tools
npm install --save-dev husky @commitlint/{cli,config-conventional}

### Configure the repo to use Husky
npx husky

### Configure Husky to use commitlint
echo "npx --no -- commitlint --edit \$1" >.husky/commit-msg

## Setup commitlint
cat <<-EOF >.commitlintrc.yaml
extends:
  - "@commitlint/config-conventional"
EOF

## Benchmark fail and pass cases
hyperfine --export-markdown results-fail.md --time-unit millisecond --ignore-failure 'git commit --allow-empty -m "fail"'
hyperfine --export-markdown results-pass.md --time-unit millisecond 'git commit --allow-empty -m "fix: fixed thing"'
```

Conform

```sh
## Set up test repo
tmp_dir=$(mktemp -d /tmp/conform.XXX)
cd "$tmp_dir" && git init

### Install commit-msg hooks
cat <<EOF | tee .git/hooks/commit-msg
#!/bin/sh
$tmp_dir/conform-linux-amd64 enforce --commit-msg-file \$1
EOF
chmod +x .git/hooks/commit-msg

## Install tools
### linux-amd64
wget -qO- https://api.github.com/repos/siderolabs/conform/releases |
  jq -r '.[].tag_name' |
  sed -n 1p |
  xargs -I {} wget -qO- https://api.github.com/repos/siderolabs/conform/releases/tags/{} |
  jq '.assets.[] | select(.name == "conform-linux-amd64") | .browser_download_url' |
  xargs -I {} wget -q {} &&
  chmod +x conform-linux-amd64

## Setup conform
cat <<-EOF >.conform.yaml
policies:
  - type: commit
    spec:
      conventional:
        type:
EOF

## Benchmark fail and pass cases
hyperfine --export-markdown results-fail.md --time-unit millisecond --ignore-failure 'git commit --allow-empty -m "fail"'
hyperfine --export-markdown results-pass.md --time-unit millisecond 'git commit --allow-empty -m "fix: fixed thing"'
```

Results

Husky + commitlint:

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| git commit --allow-empty -m "fail" | 1422 ± 0.14 | 1400 | 1439 | 1.00 |
| git commit --allow-empty -m "fix: thing" | 2338 ± 0.44 | 2285 | 2414 | 1.00 |

Conform:

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| git commit --allow-empty -m "fail" | 11.4 ± 0.9 | 8.9 | 14.5 | 1.00 |
| git commit --allow-empty -m "fix: thing" | 969.4 ± 5.7 | 966.4 | 985.5 | 1.00 |

🤯 - I was expecting at least some improvement because of the lack of husky, and conform being a Go binary. But a 100x speedup? nice.

The reduced speedup during successful commits is from git itself. But the 2x speedup is still pretty significant from a user standpoint. A one second difference is definitely noticeable.
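(One sanity check I didn't run, but which would isolate git's share of that time: benchmark an empty commit in a fresh repo with no hooks installed at all. Something along these lines:)

```sh
## Baseline: time an empty commit with no hooks in the way
tmp_dir=$(mktemp -d /tmp/baseline.XXX)
cd "$tmp_dir" && git init
hyperfine --export-markdown results-baseline.md --time-unit millisecond 'git commit --allow-empty -m "fix: baseline"'
```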
Hook Manager Overhead or Programming Language? Now my curiosity was piqued. How much of the slowdown was from the husky hook manager overhead and how much was from the fact conform is written in Go? I decided to set up conform using pre-commit and husky to see what might be causing the slowdown.

Conform + pre-commit:

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| git commit --allow-empty -m "fail" | 357.2 ± 7.3 | 344.6 | 367.0 | 1.00 |
| git commit --allow-empty -m "fix: thing" | 1258 ± 0.19 | 1238 | 1298 | 1.00 |

Conform + Husky:

| Command | Mean [ms] | Min [ms] | Max [ms] | Relative |
| --- | --- | --- | --- | --- |
| git commit --allow-empty -m "fail" | 639.1 ± 3.8 | 631.5 | 644.0 | 1.00 |
| git commit --allow-empty -m "fix: thing" | 1541 ± 0.16 | 1519 | 1570 | 1.00 |

🤔 Interesting. The hook manager is definitely adding some overhead and the programming language is certainly a factor.

It might be worth investigating some more hook managers for performance benefits. Maybe even make one?

Test Conclusion The performance differences are striking:

- Conform processes failed commits 100x faster than husky + commitlint
- Successful commits show a 2x speed improvement with conform
- Even when using a hook manager, conform outperforms commitlint significantly

Fail Tests:

| Configuration | Mean [ms] | Min [ms] | Max [ms] | Rel-Slowdown |
| --- | --- | --- | --- | --- |
| Conform | 11.4 ± 0.9 | 8.9 | 14.5 | 0% |
| pre-commit + Conform | 357.2 ± 7.3 | 344.6 | 367.0 | -3003% |
| Husky + Conform | 639.1 ± 3.8 | 631.5 | 644.0 | -5506% |
| Husky + commitlint | 1422 ± 0.14 | 1400 | 1439 | -12374% |

Pass Tests:

| Configuration | Mean [ms] | Min [ms] | Max [ms] | Rel-Slowdown |
| --- | --- | --- | --- | --- |
| Conform | 969.4 ± 5.7 | 966.4 | 985.5 | 0% |
| pre-commit + Conform | 1258 ± 0.19 | 1238 | 1298 | -30% |
| Husky + Conform | 1541 ± 0.16 | 1519 | 1570 | -59% |
| Husky + commitlint | 2338 ± 0.44 | 2285 | 2414 | -141% |

Conform is the clear winner in terms of performance.

Tool Install While most of these commands will look familiar if you checked out the benchmark scripts, I wanted to add a more thorough install guide, now that you may have a better idea of what you might want to use.

Conform Unfortunately, since conform is marked as a pre-release, there isn't a latest tag to grab. Anyone else having flashbacks to getting the Hugo binary?

No? Maybe it's just me then 😬
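(Some context on why the footwork is needed, as far as I understand GitHub's API: the releases/latest endpoint only returns the most recent non-prerelease release, so a repo that has published nothing but pre-releases gives you nothing useful back. You can see it for yourself:)

```sh
## `releases/latest` skips pre-releases entirely, so for conform this
## returns a Not Found response instead of a release object
wget -qO- https://api.github.com/repos/siderolabs/conform/releases/latest
```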
So - we are going to have to do a little extra footwork to download the latest binary release.

Getting the Pseudo-Latest Release

```sh
#!/bin/sh

## set list of releases as variable for reuse
conform_releases=$(wget -qO- https://api.github.com/repos/siderolabs/conform/releases)

## parse the var containing the releases with `jq` for tag names
## should be sorted by latest, so use `sed` to print the first line
conform_pseudo_latest_tag=$(echo "$conform_releases" | jq -r '.[].tag_name' | sed -n 1p)

## set the name of our platform
conform_binary_platform="conform-linux-amd64"

## vars need to be exported for them to be available to `jq`
export conform_pseudo_latest_tag conform_binary_platform

## get download url
conform_download_url=$(echo "$conform_releases" | jq -r '.[] | select(.name == env.conform_pseudo_latest_tag) | .assets.[] | select(.name == env.conform_binary_platform) | .browser_download_url')

## download binary and make executable
conform_install_path="$HOME/.local/bin/conform"
wget -O "$conform_install_path" "$conform_download_url" && chmod +x "$conform_install_path"

## cleanup env
unset conform_pseudo_latest_tag conform_binary_platform
```

Installing Conform

Binary install method

Get the conform binary:

See Getting the Pseudo-Latest Release above.

Create a conform config:

.conform.yaml

```yaml
policies:
  - type: commit
    spec:
      conventional:
        descriptionLength: 72
        scopes: [".*"] # Allow all scopes (regex)
        types:
          - build
          - chore
          - ci
          - docs
          - feat
          - fix
          - perf
          - refactor
          - revert
          - style
          - test
      header:
        case: lower
        imperative: true
        invalidLastCharacters: .
        length: 72
      spellcheck:
        locale: US
```

Add git commit-msg hook:

.git/hooks/commit-msg

```sh
#!/bin/sh
conform enforce --commit-msg-file "$1"
```

Make git hook executable:

chmod +x .git/hooks/commit-msg

Installing with pre-commit

Install pre-commit:

pip install pre-commit

Initialize a project:

mkdir example-project && cd example-project && git init

Create pre-commit config:

.pre-commit-config.yaml

```yaml
repos:
  - repo: https://github.com/siderolabs/conform
    rev: main
    hooks:
      - id: conform
        stages:
          - commit-msg
```

Install the hook with pre-commit:

pre-commit install --hook-type commit-msg

Create conform config:

.conform.yaml

```yaml
policies:
  - type: commit
    spec:
      conventional:
        descriptionLength: 72
        scopes: [".*"] # Allow all scopes (regex)
        types:
          - build
          - chore
          - ci
          - docs
          - feat
          - fix
          - perf
          - refactor
          - revert
          - style
          - test
      header:
        case: lower
        imperative: true
        invalidLastCharacters: .
        length: 72
      spellcheck:
        locale: US
```
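With either route, it's worth a quick smoke test that the hook actually fires before trusting it. From inside the repo you just configured, the first commit should be rejected and the second accepted:

```sh
## should FAIL: the message doesn't follow the conventional policy
git commit --allow-empty -m "definitely not conventional"

## should PASS
git commit --allow-empty -m "fix: verify the commit-msg hook"
```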
💡 Tip

Once pre-commit and conform are installed, this script can quickly configure a repo:

```sh
#!/bin/sh

## Create pre-commit config
cat <<'EOF' >.pre-commit-config.yaml
# install with `pre-commit install -t commit-msg`
repos:
  - repo: https://github.com/siderolabs/conform
    rev: main
    hooks:
      - id: conform
        stages:
          - commit-msg
EOF

## Install pre-commit hooks
pre-commit install --hook-type commit-msg

## Create conform config
cat <<'EOF' >.conform.yaml
policies:
  - type: commit
    spec:
      conventional:
        descriptionLength: 72
        scopes: [".*"] # Allow all scopes (regex)
        types:
          - build
          - chore
          - ci
          - docs
          - feat
          - fix
          - perf
          - refactor
          - revert
          - style
          - test
      header:
        case: lower
        imperative: true
        invalidLastCharacters: .
        length: 72
      spellcheck:
        locale: US
EOF
```

Husky and Commitlint Run the following commands from within a git repo.

husky:

Link to the official documentation

```sh
npm install --save-dev husky
npx husky
```

commitlint:

Link to the official documentation

```sh
npm install --save-dev @commitlint/{cli,config-conventional}

## conventional commits spec
## can use others, such as `config-angular`
## just be sure to replace the package above too
echo "export default { extends: ['@commitlint/config-conventional'] };" > commitlint.config.js
```

Adding commitlint to husky:

```sh
# Add commit message linting to commit-msg hook
echo "npx --no -- commitlint --edit \$1" > .husky/commit-msg
```

Setting Up Git Hooks Automatically So - we have our tools installed, but one of the semi-annoying things about Git hooks is that they need to be set up for each repository. However, we can partially automate this process for new repositories using Git's template directory feature.

We can automate hook setup for new repositories:

- Create a template directory with your desired hooks
- Configure Git to use this template by default for new repositories
- Every new git init will automatically include your hook scripts

❗ Important: Cloning a repo will still require a new install of hooks to that repo

🗒️ Note

It is not possible to have files included in the repo with a template. This means no pre-populating a base config for the hooks. A workaround is to add an init.sh script that is manually executed post init, but this isn't ideal.
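(If you haven't used templates before: git init simply copies the template directory's contents into the new repo's .git/, so once the template described below is configured you can verify the mechanism with something like:)

```sh
## after setting init.templateDir (next section), hooks come along for free
git init demo && ls demo/.git/hooks
```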
Using a Git Template With Conform

Create a template directory:

mkdir -p ~/git-templates/conform/hooks

Download the conform binary:

See: Conform or grab it from siderolabs/conform

Put the binary at ~/git-templates/conform/hooks/conform

🗒️ Note

The commit-msg script below executes the binary conform inside the hooks directory, so make sure the binary isn't named something like conform-linux-amd64 from when it was downloaded.

Alternatively, adjust the commit-msg file to use a different executable name.

Add your commit-msg hook:

~/git-templates/conform/hooks/commit-msg

```sh
#!/bin/sh
.git/hooks/conform enforce --commit-msg-file "$1"
```

Make the hook executable:

chmod +x ~/git-templates/conform/hooks/commit-msg

Optional - make an init.sh:

~/git-templates/conform/hooks/init.sh

```sh
#!/bin/sh

cat <<'EOF' >.conform.yaml && echo "Added .conform.yaml"
policies:
  - type: commit
    spec:
      conventional:
        descriptionLength: 72
        scopes: [".*"] # Allow all scopes (regex)
        types:
          - build
          - chore
          - ci
          - docs
          - feat
          - fix
          - perf
          - refactor
          - revert
          - style
          - test
      header:
        case: lower
        imperative: true
        invalidLastCharacters: .
        length: 72
      spellcheck:
        locale: US
EOF
```

Make the init.sh executable:

chmod +x ~/git-templates/conform/hooks/init.sh

Tell git to use this template:

Option 1 - specify a template for each git init:

git init --template="$HOME/git-templates/conform"

Option 2 - use a global template:

git config --global init.templateDir ~/git-templates/conform

Add a conform config to the repo post-init:

From inside the new repo, run .git/hooks/init.sh (if added) or manually add a .conform.yaml

Looking Forward: CI/CD Pipeline Integration While local commit hooks are valuable, moving some hooks to your CI/CD pipeline can significantly improve developer experience and enable more comprehensive checks.

Moving Beyond Local Validation An extended local git hook toolset often slows down development by running hooks far beyond basic commit linting and code formatting. Hook setups can quickly become bloated with hooks for things like tests and builds on every commit.

Benefits of Pipeline-Based Validation By offloading these tasks to your CI/CD pipeline, developers can commit changes quickly without waiting for checks and push to feature branches for comprehensive validation, gaining:

- Faster local development cycles
- A consistent validation environment
- Comprehensive security scanning
- Automated policy enforcement
- Parallel execution of intensive tasks

By moving extended validation to your CI/CD pipeline, developers can focus on writing code while still ensuring all necessary checks are performed thoroughly and consistently.

Conclusion Structured commits might seem like a small detail, but they significantly impact project maintenance and team collaboration. The performance improvements offered by modern tools remove traditional friction points, making it easier than ever to maintain high commit standards.

Start with small steps - perhaps just categorizing commits as feat or fix. As you see the benefits in your workflow, gradually adopt more conventions.
Remember, the goal isn\u0026rsquo;t perfection but better communication and automation in your development process.\n","permalink":"https://zags.dev/posts/improving-git-commits/","summary":"A deep dive into structuring and automating better Git commits","title":"Improving Git Commits"},{"content":"Abstract Insider trading represents a complex and critical challenge in financial markets, undermining investor confidence and market integrity. This research paper examines the legal frameworks, regulatory mechanisms, and significant cases that define insider trading regulations in the United States, highlighting the ongoing tension between corporate information access and fair market practices.\nKeywords: insider trading, securities regulation, legal frameworks\nSecurities Regulation: Insider Trading Description of the Issue Insider trading occurs when individuals with privileged access to Material Non-Public Information (MNPI) about a company trade securities based on that confidential knowledge. MNPI is a legal concept in securities regulation that describes confidential data with potential significance to investment decisions. Characterized by materiality, non-public status, and market relevance, MNPI includes information not widely available that a reasonable investor would consider important in evaluating a security. This encompasses corporate developments such as pending mergers, substantive financial performance changes, leadership transitions, potential litigation, technological innovations, or significant operational shifts. The practice fundamentally undermines the principles of fair and equal market participation by creating an uneven playing field where certain participants can gain unwarranted economic advantages, a challenge observed across international markets with varying regulatory effectiveness (Thompson, 2013).\nInsider trading presents complex ethical and legal challenges that strike at the fundamental principles of market fairness and investor trust. While some argue that insider trading could enhance market efficiency by accelerating the incorporation of information into stock prices, the practice erodes the fundamental trust that investors place in financial markets as demonstrated by empirical studies comparing agency and market theories of insider trading as well as global studies on the economic costs of insider trading (Beny, 2004; Bhattacharya \u0026amp; Daouk, 2002; Smith \u0026amp; Block, 2015). When corporate insiders or individuals with privileged information use their knowledge to make trading decisions, they create a systemic disadvantage for ordinary investors who lack such insider perspectives. This asymmetry of information distorts genuine market valuations, potentially manipulating stock prices and undermining the transparent price discovery mechanism that is critical to healthy financial markets, though proponents of a free market approach suggest insider trading may provide some efficiency benefits (Beny, 2004; Bhattacharya \u0026amp; Daouk, 2002; Smith \u0026amp; Block, 2015).\nThe complexity of insider trading extends beyond simple stock transactions, often involving mixed motives where individuals act on both permissible and impermissible information (Verstein, 2021). 
It encompasses a wide range of scenarios, from direct personal trading by corporate executives to more intricate schemes involving complex information-sharing networks, further complicated by cases where insider actions are driven by a mix of legitimate and illegitimate motives (Verstein, 2021). These sophisticated methods of information exploitation can take many forms, including direct trading, tipping information to third parties, or creating elaborate networks designed to circumvent existing regulatory frameworks.\nRelated Laws and Legal Sources The legal landscape of insider trading regulation in the United States represents a complex, dynamic system that has evolved through strategic legislative interventions responding to sophisticated financial challenges. Understanding the historical context and progressive development of these laws provides insight into the mechanisms designed to protect market integrity.\nThe Securities Exchange Act of 1934: Foundational Legislation The Securities Exchange Act of 1934 emerged as a direct response to the catastrophic market failures and widespread financial manipulation that characterized the 1929 stock market crash and subsequent Great Depression (Hohenstein, 2006). This landmark legislation established the Securities and Exchange Commission (SEC) as a powerful regulatory body with comprehensive oversight of securities markets, which has been shown to positively impact capital market efficiency when paired with effective enforcement (Christensen, Hail, \u0026amp; Leuz, 2011; Hohenstein, 2006). Section 10(b) and Rule 10b-5 became particularly instrumental in addressing fraudulent practices.\nSection 10(b) provides broad prohibitions against manipulative and deceptive practices in securities trading, while Rule 10b-5 offers specific regulatory mechanisms to enforce these prohibitions, addressing longstanding concerns about the ethical and economic implications of insider trading (Poser \u0026amp; Manne, 1967). Recent amendments to Rule 10b5-1 have strengthened these mechanisms by addressing gaps in insider trading enforcement (Monsour, Rosner, \u0026amp; Turner, 2022). These provisions serve as the cornerstone of insider trading enforcement, illustrating the evolution of regulatory priorities (Bainbridge, 2012). This framework reflects a regulatory choice between treating insider trading as a property rights issue versus a form of securities fraud (Bainbridge, 2001). By creating a flexible framework that could adapt to evolving financial schemes, these provisions became the primary legal instruments for prosecuting insider trading. The intentionally broad language allowed regulators to address a wide range of unethical behaviors that might not have been explicitly anticipated in earlier legislative efforts.\nThe Insider Trading Sanctions Act of 1984: Increasing Deterrence Recognizing the limitations of existing regulatory mechanisms, the Insider Trading Sanctions Act represented a significant escalation in legal consequences, further aligning insider trading enforcement with the fraud-based regulatory framework, and reflecting an ongoing effort to address the criticisms of insider trading's impact on market fairness (Bainbridge, 2001). Prior to this legislation, insider trading penalties were relatively modest and often viewed as a calculable business risk by sophisticated financial actors. 
The 1984 Act dramatically transformed this calculus by introducing civil penalties that could reach up to three times the profits gained or losses avoided through insider trading. This marked a pivotal moment in the SEC's evolving approach to deterrence, demonstrated the importance of regulatory enforcement in achieving capital market stability, and aligned the United States with global trends in strengthening enforcement measures against insider trading (Christensen, Hail, & Leuz, 2011; Hohenstein, 2006; Thompson, 2013).

This legislative approach reflected a fundamental shift in regulatory philosophy, moving from a purely punitive model to a more comprehensive deterrence strategy: by introducing several critical innovations, the Act expanded enforcement capabilities while aligning with a broader legal framework aimed at balancing regulatory deterrence and market fairness (Bainbridge, 2012). By making potential financial penalties substantially outweigh potential gains, the Act created a powerful economic disincentive for insider trading (Ayres & Bankman, 2001; Bhattacharya & Daouk, 2002). The legislation also expanded the SEC's ability to seek these penalties, effectively weaponizing the regulatory framework.

The Insider Trading and Securities Fraud Enforcement Act of 1988: Enhanced Accountability Building upon the foundation established in 1984, the 1988 Insider Trading and Securities Fraud Enforcement Act further refined and strengthened insider trading regulations by significantly expanding enforcement capabilities. Most notably, it established robust whistleblower protections and incentive mechanisms, recognizing that effective regulation often requires insider information and cooperation, as underscored by the critical role of gatekeepers in maintaining corporate governance integrity (Coffee, 2006).

The Act also increased potential criminal and civil penalties, creating a more comprehensive deterrence framework that, by explicitly protecting and incentivizing individuals who could provide crucial information about insider trading schemes, acknowledged the complex and networked nature of financial misconduct (Coffee, 2006). This approach recognized that combating sophisticated financial crimes requires a nuanced, multi-layered strategy.

Sarbanes-Oxley Act of 2002: Corporate Governance Revolution In response to major corporate scandals like Enron and WorldCom, the Sarbanes-Oxley Act represented a comprehensive reimagining of corporate financial oversight, addressing the ongoing tension between corporate promises and actual governance practices (Macey, 2010). While not exclusively focused on insider trading, the legislation significantly enhanced corporate transparency and executive accountability mechanisms that indirectly combated insider trading.

Key provisions included mandatory certification of financial reports by CEOs and CFOs, enhanced disclosure requirements, and more stringent penalties for corporate fraud, reflecting an increased reliance on gatekeepers to ensure corporate accountability (Coffee, 2006).
By creating a culture of increased transparency and personal accountability, Sarbanes-Oxley addressed insider trading through systemic cultural and structural reforms rather than solely through punitive measures (Ayres & Bankman, 2001).

Dodd-Frank Wall Street Reform and Consumer Protection Act of 2010 The most recent significant legislative intervention, the Dodd-Frank Act, further expanded regulatory capabilities in response to the 2008 financial crisis, emphasizing the need for strong implementation and enforcement to realize the full benefits of securities regulation (Christensen, Hail, & Leuz, 2011). This legislation introduced unprecedented whistleblower rewards, allowing individuals to receive substantial financial compensation for providing actionable information about securities law violations, an approach that was complemented by recent updates to Rule 10b5-1 designed to enhance transparency in trading plans and that reflects broader trends in securities regulation aimed at incentivizing compliance (Monsour, Rosner, & Turner, 2022; Zingales, 2009).

By offering rewards of up to 30% of monetary sanctions exceeding $1 million, the Act created powerful economic incentives for exposing insider trading and other financial misconduct. This approach recognized that effective regulation requires not just punishment, but active participation from market participants in maintaining systemic integrity (Zingales, 2009).

Insider Trading Prosecutions The Martha Stewart Case: A High-Profile Corporate Governance Controversy The SEC v. Martha Stewart case, filed in 2003, represents a watershed moment in insider trading jurisprudence, with implications extending far beyond traditional corporate contexts. Stewart was accused of insider trading related to ImClone Systems stock, involving a suspicious sale of shares based on non-public information received from her broker.

The legal proceedings centered on Rule 10b-5 of the Securities Exchange Act of 1934, which prohibits fraudulent practices in securities trading. Specifically, Stewart was charged with securities fraud, obstruction of justice, and making false statements to federal investigators. While the insider trading charge was technically a civil matter, the case resulted in criminal prosecution that ultimately led to a five-month prison sentence and significant reputational damage.

The case's significance extended well beyond its immediate legal outcome. It demonstrated that insider trading laws apply universally, regardless of an individual's public profile or social status. The prosecution illustrated the SEC's commitment to pursuing insider trading across diverse contexts, sending a powerful message about accountability in financial markets.

The Raj Rajaratnam and Galleon Group Case: Systematic Institutional Fraud The Galleon Group case, prosecuted in 2011, represented one of the most sophisticated and extensive insider trading schemes in hedge fund history. Raj Rajaratnam, the founder of Galleon Group, was found to have orchestrated a complex network designed to systematically gather and exploit insider information across multiple corporations.

The prosecution relied primarily on the Insider Trading Sanctions Act of 1984 and the Insider Trading and Securities Fraud Enforcement Act of 1988, demonstrating how rigorous enforcement mechanisms enhance the effectiveness of regulatory frameworks (Christensen, Hail, & Leuz, 2011).
Investigators demonstrated a systematic approach to gathering non-public information, including wiretapped phone conversations and testimony from corporate insiders who had been providing privileged information.\nThe legal outcome was unprecedented, reflecting the application of insider trading laws that have been shaped by decades of regulatory and judicial refinement (Bainbridge, 2012). Rajaratnam received an 11-year prison sentence, the longest ever imposed for insider trading at that time, and was ordered to pay $92.8 million in penalties. The case highlighted several critical aspects of modern insider trading, including how the legal framework evolved from a property rights perspective to an enforcement model grounded in securities fraud (Bainbridge, 2001).\nThe investigation revealed the sophisticated methods used to gather and exploit insider information in complex financial networks, emphasizing the need for substitute mechanisms to deter such behavior (Ayres \u0026amp; Bankman, 2001). It demonstrated how technological capabilities could be used both to commit and detect financial fraud. Moreover, the case underscored the government's willingness to pursue aggressive prosecution strategies in combating sophisticated financial crimes.\nSteve Cohen and SAC Capital Advisors: Institutional Systemic Challenges The investigation into SAC Capital Advisors in 2013 represented a landmark moment in addressing systemic insider trading within large financial institutions. Unlike individual prosecutions, this case exposed widespread cultural and institutional challenges in preventing insider trading, underscoring the tension between property rights arguments and fraud-based enforcement strategies (Bainbridge, 2001).\nThe legal proceedings leveraged multiple statutes, including the Insider Trading Sanctions Act and provisions of the Sarbanes-Oxley Act that increased corporate accountability, consistent with the comprehensive policy goals outlined in insider trading law and the gatekeeping role of corporate actors in preventing systemic failures (Bainbridge, 2012; Coffee, 2006). While Steve Cohen was not personally criminally charged, his firm faced extensive legal consequences. SAC Capital was forced to pay $1.8 billion in settlements and convert from a hedge fund to a family office.\nThe case\u0026rsquo;s implications were substantial, as it demonstrated that regulatory bodies were willing to pursue institutional-level accountability, potentially dismantling entire financial organizations found to have systemic ethical failures, suggesting the importance of substitutes to insider trading as preventive mechanisms (Ayres \u0026amp; Bankman, 2001). The prosecution suggested a shift from individual-focused enforcement to more comprehensive institutional oversight.\nBroader Implications and Evolving Legal Landscape These cases collectively illustrate the dynamic nature of insider trading prosecution. They demonstrate an evolving legal approach that balances technological capabilities, institutional accountability, and individual responsibility. Each prosecution has contributed to a more sophisticated understanding of financial misconduct, pushing regulatory boundaries and reinforcing market integrity principles.\nThe progression of these cases shows a clear trajectory: from individual prosecution to more complex, systemic approaches that examine institutional cultures and information networks, mirroring the evolving legal strategies described in insider trading policy literature (Bainbridge, 2012). 
As financial technologies continue to evolve, so too will the legal mechanisms designed to maintain fair and transparent markets.

The ongoing challenge remains creating regulatory frameworks flexible enough to address emerging technologies and sophisticated financial strategies while maintaining clear, enforceable standards of ethical conduct.

Analysis and Conclusion Insider trading remains a persistent and evolving challenge in financial markets, despite the implementation of robust legal frameworks (Thompson, 2013). The current regulatory mechanisms possess significant strengths but also demonstrate notable limitations, particularly in their implementation and enforcement, which are crucial for ensuring positive capital market effects (Christensen, Hail, & Leuz, 2011). The increasing sophistication of financial technologies and information networks continuously creates new avenues for potential insider trading, such as shadow trading strategies, necessitating ongoing adaptation of regulatory approaches, a critical insight from global studies on enforcement (Ayres & Bankman, 2001; Bhattacharya & Daouk, 2002; Woody & Davidson, 2023).

The effectiveness of existing laws depends on a complex interplay of technological monitoring, legal enforcement, and corporate ethical standards, with the regulatory framework reflecting a historical choice to treat insider trading as a securities fraud issue rather than a property rights concern (Bainbridge, 2001; Beny, 2004; Jagolinzer, 2008). Technological innovations have made both the detection and perpetration of insider trading more sophisticated. Real-time information flows and advanced communication technologies create unprecedented challenges for traditional regulatory mechanisms, particularly in emerging areas like shadow trading (Woody & Davidson, 2023).

Corporate culture and individual ethical standards emerge as critical factors in preventing insider trading, aligning with insights from empirical investigations of agency theory in corporate environments and highlighting the complex dynamics of corporate governance and institutional accountability (Ayres & Bankman, 2001; Beny, 2004; Macey, 2010). Beyond legal penalties, creating an institutional environment that prioritizes transparency, accountability, and ethical behavior becomes crucial, underscoring the importance of gatekeepers in fostering ethical corporate governance (Coffee, 2006; Jagolinzer, 2008). This requires a multifaceted approach involving regulatory bodies, corporate leadership, and individual professionals, along with a deeper examination of institutional promises and actual implementation (Macey, 2010).

Future improvements in combating insider trading will likely require a holistic strategy, including addressing emerging trading practices like shadow trading that exploit regulatory gaps (Woody & Davidson, 2023).
Enhanced real-time monitoring technologies, more aggressive enforcement mechanisms, continued professional education on ethical financial practices, and more nuanced legal definitions of material, non-public information will be essential, alongside strengthening the gatekeeping role of professionals in safeguarding market integrity, particularly in light of comparative studies on the effectiveness of regulatory approaches (Ayres & Bankman, 2001; Beny, 2004; Coffee, 2006).

In conclusion, while substantial progress has been made in addressing insider trading, the dynamic nature of financial markets demands continuous vigilance, guided by the foundational principles and policy considerations central to insider trading regulation (Bainbridge, 2012). Maintaining market integrity requires an ongoing commitment to technological innovation, robust legal frameworks, a fundamental dedication to ethical corporate governance, and an understanding of the gap between corporate governance promises and actual institutional behaviors, a conclusion also supported by research on the global economic benefits of effective insider trading enforcement (Bhattacharya & Daouk, 2002; Christensen, Hail, & Leuz, 2011; Macey, 2010).

References

Ayres, I., & Bankman, J. (2001). Substitutes for insider trading. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.265408

Bainbridge, S. M. (2001). Insider trading regulation: The path dependent choice between property rights and securities fraud. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.208272

Bainbridge, S. M. (2012, September 4). An overview of insider trading law and policy: An introduction to the Insider Trading Research Handbook. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=2141457

Beny, L. N. (2004). A comparative empirical investigation of agency and market theories of insider trading. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.193070

Bhattacharya, U., & Daouk, H. (2002). The world price of insider trading. The Journal of Finance, 57(1), 75–108. https://doi.org/10.1111/1540-6261.00416

Christensen, H., Hail, L., & Leuz, C. (2011). Capital-market effects of securities regulation: Prior conditions, implementation, and enforcement. https://doi.org/10.3386/w16737

Coffee, J. C. (2006). Gatekeepers: The professions and corporate governance. Oxford University Press, USA.

Hohenstein, K. (2006, November 1). Fair to all people: The SEC and the regulation of insider trading. Securities and Exchange Commission Historical Society. https://www.sechistorical.org/museum/galleries/it/

Jagolinzer, A. D. (2008). SEC Rule 10b5-1 and insiders' strategic trade. Management Science, 55(2), 224–239. https://doi.org/10.1287/mnsc.1080.0928

Macey, J. R. (2010). Corporate governance: Promises kept, promises broken. Princeton University Press.

Monsour, P., Rosner, I. N., & Turner, S. M. (2022, December 22). A closer look at the Rule 10b5-1 amendments adopted by the SEC. Holland & Knight. https://www.hklaw.com/en/insights/publications/2022/12/a-closer-look-at-the-rule-10b51-amendments-adopted-by-the-sec

Poser, N. S., & Manne, H. G. (1967). Insider trading and the stock market. Virginia Law Review, 53(3), 753. https://doi.org/10.2307/1071677

Smith, T., & Block, W. E. (2015).
The economics of insider trading: A free market perspective. Journal of Business Ethics, 139(1), 47–53. https://doi.org/10.1007/s10551-015-2621-5

Thompson, J. H. (2013). A global comparison of insider trading regulations. International Journal of Accounting and Financial Reporting, 3(1), 1. https://doi.org/10.5296/ijafr.v3i1.3269

Verstein, A. (2021). Mixed motives insider trading. Iowa Law Review, 106(3). https://ilr.law.uiowa.edu/print/volume-106-issue-3/mixed-motives-insider-trading

Woody, & Davidson, K. M. (2023). Safe harbors in the shadows: Extending 10b5-1 plans to cover shadow trading. Michigan State Law Review, 1069.

Zingales, L. (2009). The future of securities regulation. SSRN Electronic Journal. https://doi.org/10.2139/ssrn.1319648
","permalink":"https://zags.dev/papers/insider-trading/","summary":"This paper examines the legal frameworks, regulatory mechanisms, and significant cases that define insider trading regulations in the United States, highlighting the ongoing tension between corporate information access and fair market practices.","title":"Securities Regulation: Insider Trading"},{"content":"Abstract Cloud computing has become a cornerstone of modern organizational IT strategy, offering significant benefits in terms of scalability, cost efficiency, and innovation. However, the decision to adopt cloud computing is not without its challenges. This paper examines the strategic considerations and technical factors organizations must evaluate when contemplating cloud adoption. It provides a framework for decision-making that aligns cloud computing with organizational goals, emphasizing the importance of understanding both the potential benefits and the inherent risks associated with cloud technologies. The paper concludes with a set of guidelines to help organizations navigate the complexities of cloud adoption, ensuring that their cloud strategy is both effective and aligned with their long-term objectives.

Keywords: cloud computing, organizational strategy, IT infrastructure, data management, security, compliance

Introduction In the landscape of information technology, organizations are confronted with the persistent challenge of maintaining pace with technological advancements. Cloud computing has emerged as a transformative technology with significant implications for organizational operations and strategy. Cloud computing, as defined by the National Institute of Standards and Technology (NIST), is a "model for enabling ubiquitous, convenient, on-demand network access to a shared pool of configurable computing resources (e.g., networks, servers, storage, applications, and services) that can be rapidly provisioned and released with minimal management effort or service provider interaction" (Mell & Grance, 2011). The ability to deploy and manage technological resources on-demand has the potential to greatly improve the efficiency of provisioning information technology infrastructure. This can have a substantial impact on an organization in the form of cost savings and competitive advantage.

This paper aims to examine the considerations organizations must evaluate when contemplating cloud adoption. Certain organizational constraints such as legal requirements or industry standard practices may impact how the cloud is utilized within an organization (Kroenke & Boyle, 2022, p. 207). These constraints may affect information technology infrastructure complexity and risk management (Kroenke & Boyle, 2022, pp. 232-233). It is noteworthy that refusing to use some form of cloud capability is no longer a viable option for most organizations.
Consequently, being aware of what constraints may impact the adoption of the cloud is vital to ensure proper risk management (Kaplan, 2013). The paper reviews a brief history of cloud computing, examines the primary and technical considerations an organization must navigate before adopting the cloud, and concludes with a simple framework that organizations can use to evaluate cloud adoption. The objective is to equip readers with an understanding of how to assess the alignment of cloud computing with organizational goals.

Historical Context and Evolution of Cloud Computing The concept of cloud computing, which involves the remote management of computational resources in off-premises data centers, has been in existence for over a decade, with most organizations beginning to utilize the cloud in 2008 (Kroenke & Boyle, 2022, p. 204). Subsequently, the cloud computing landscape has undergone significant evolution, with major entities such as Microsoft, Google, IBM, and Oracle entering the market and driving continuous innovation (Dignan, 2021).

The evolution of cloud computing can be categorized into three primary service models (Kroenke & Boyle, 2022, p. 211):

Infrastructure as a Service (IaaS): This model provides a traditional server-like experience through the use of virtualized computing resources.

Platform as a Service (PaaS): This approach offers a system with the necessary tooling to run an application, such as an operating system with a database pre-installed, or an operating system with the necessary libraries and software to host a webserver, where only the application itself needs to be configured.

Software as a Service (SaaS): This model delivers ready-to-use software applications, such as an application to read emails, upload files, or create documents.

Each of these models offers varying degrees of control, flexibility, and management, enabling organizations to select the most appropriate option based on their specific requirements and technical capabilities. The rapid growth of these services is evident in market forecasts. Gartner (2024) projects that worldwide end-user spending on public cloud services will increase by 20.4% in 2024, reaching a total of $675.4 billion, a significant rise from $561 billion in 2023.

Primary Considerations for Cloud Adoption Cost and Complexity Analysis While cloud computing often promises cost reductions, it is essential to conduct a comprehensive cost-benefit analysis prior to adoption. Cloud services operate on a pay-as-you-go model, which can potentially lead to significant savings by eliminating the need for upfront capital expenditure on hardware. However, organizations must be cognizant of hidden costs such as data transfer fees, storage expenses, and potential outlays related to cloud expertise acquisition. Armbrust et al. (2009) emphasize that while cloud computing can offer cost advantages, the actual cost savings are contingent upon the specific use case and implementation strategy.

Case Study: General Electric's $7B Cloud Failure To illustrate the potential pitfalls of cloud adoption without proper planning, consider the case of General Electric's $7 billion cloud failure. Predix, a PaaS launched by GE Digital, aimed to support Industrial Internet of Things (IIoT) solutions. Predix's goal was to provide a platform to support the end-to-end development and deployment of IIoT devices.
Primary Considerations for Cloud Adoption Cost and Complexity Analysis While cloud computing often promises cost reductions, it is essential to conduct a comprehensive cost-benefit analysis prior to adoption. Cloud services operate on a pay-as-you-go model, which can potentially lead to significant savings by eliminating the need for upfront capital expenditure on hardware. However, organizations must be cognizant of hidden costs such as data transfer fees, storage expenses, and potential outlays related to cloud expertise acquisition. Armbrust et al. (2009) emphasize that while cloud computing can offer cost advantages, the actual cost savings are contingent upon the specific use case and implementation strategy.\nCase Study: General Electric\u0026rsquo;s $7B Cloud Failure To illustrate the potential pitfalls of cloud adoption without proper planning, consider the case of General Electric\u0026rsquo;s $7 billion cloud failure. Predix, a PaaS launched by GE Digital, aimed to support Industrial Internet of Things (IIoT) solutions by providing a platform for the end-to-end building, running, and operating of IIoT devices through an edge-to-cloud2 approach. While Predix had the potential for complete market dominance as a platform, a company culture that was not digital-first3, combined with over-ambitious goals, produced a platform that was projected to generate $15 billion of revenue by 2020 but cost $7 billion in development spending, generated only $1 billion in revenue by its target date, and squandered the opportunity to capture the market (Pereira, n.d.).\nData Migration and Management Data migration is a critical consideration in cloud adoption. Organizations must account for the time, effort, and potential downtime implications associated with transferring large volumes of data to the cloud. Furthermore, ongoing data management in the cloud environment necessitates careful planning to optimize costs and performance.\nData Transfer Cost Implications Cloud providers, such as AWS, often implement network ingress and egress fees, as well as storage tiers (\u0026ldquo;Amazon S3 Storage Classes\u0026rdquo;). For organizations dealing with substantial data volumes or frequent data transfers, these costs can accumulate rapidly. It is imperative to model these costs accurately and consider strategies such as data compression or batch processing to minimize expenses.
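A rough sketch of such a model follows; the tier boundaries and per-gigabyte rates are placeholder assumptions for illustration only, not actual AWS pricing, which varies by region and changes over time.

# Rough sketch: estimating monthly egress cost under tiered pricing.
# Tier sizes (in GB) and rates are placeholder assumptions, not real rates.
TIERS = [
    (10_000, 0.09),        # first 10 TB at an assumed rate per GB
    (40_000, 0.085),       # next 40 TB at a slightly lower assumed rate
    (float("inf"), 0.07),  # everything beyond that
]

def monthly_egress_cost(gb_transferred: float) -> float:
    cost, remaining = 0.0, gb_transferred
    for tier_size, rate_per_gb in TIERS:
        billed = min(remaining, tier_size)
        cost += billed * rate_per_gb
        remaining -= billed
        if remaining <= 0:
            break
    return cost

print(f"${monthly_egress_cost(25_000):,.2f}")  # an assumed 25 TB/month: $2,175.00

Even a toy model like this makes it easy to compare scenarios, such as the savings from compressing data before transfer or batching transfers into fewer, larger operations.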
Cloud Expertise The adoption of cloud technology often necessitates specialized skills that may not be present in an organization\u0026rsquo;s existing IT personnel. This means either investing in training for current staff or recruiting new employees with cloud expertise. The scarcity of cloud professionals in the labor market can render this a challenging and potentially costly proposition. Chaudhary (2023) explores the rising demand for trained cybersecurity and cloud security professionals, noting that there are approximately 3 million open job listings worldwide, indicating a significant skill gap in the industry.\nAligning Cloud Adoption with Organizational Needs The decision to adopt cloud-based solutions should be predicated on addressing specific organizational challenges (Lange, 2024). For instance, if an entity experiences user dissatisfaction due to prolonged website loading times, the implementation of a cloud-hosted Content Delivery Network (CDN) may be warranted to enhance performance (Kroenke \u0026amp; Boyle, 2022, p. 213). Organizations that face potentially significant financial losses from computational resource failures should prioritize improving system resilience. Entities with multiple geographical locations that struggle to manage on-premises or colocation servers might benefit from migrating shared resources to cloud infrastructure. Furthermore, organizations subject to frequent traffic surges could leverage the elasticity of cloud resources to mitigate lost business opportunities resulting from overloaded servers (Kroenke \u0026amp; Boyle, 2022, p. 208).\nThese examples illustrate a crucial principle: technological solutions should be tailored to address extant organizational problems rather than hypothetical scenarios. For instance, a small-scale retail organization is less likely to require optimization of page load times by marginal percentages compared to a news organization disseminating time-sensitive information. Similarly, the necessity of achieving 99.999% uptime (colloquially referred to as \u0026quot;five nines\u0026quot;) may be more pertinent to financial institutions than to businesses in less time-critical sectors.
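To put such availability targets in perspective, the short calculation below converts an uptime percentage into the downtime it permits per year; the figures follow directly from the arithmetic rather than from any particular provider's service-level agreement.

# Convert an availability target into the downtime it permits per year.
MINUTES_PER_YEAR = 365 * 24 * 60

for availability in (99.0, 99.9, 99.99, 99.999):
    downtime = MINUTES_PER_YEAR * (1 - availability / 100)
    print(f"{availability}% uptime allows ~{downtime:,.1f} minutes of downtime per year")

Five nines, for example, permits only about 5.3 minutes of downtime per year, which helps explain why such targets are generally reserved for truly time-critical sectors.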
Technical Considerations When organizations consider migrating to cloud computing, a multitude of technical considerations must be addressed to ensure success. These considerations encompass aspects such as data security, compliance with regulatory requirements, and disaster recovery. Organizations must evaluate the potential risks associated with data breaches and ensure that appropriate security measures are implemented, including encryption and access controls. Compliance is particularly critical, as organizations must adhere to various regulations governing data protection and privacy, necessitating thorough assessments of a cloud provider's compliance certifications and practices. Additionally, robust disaster recovery plans are essential to mitigate potential data loss and ensure business continuity in the event of a security incident or service disruption. By carefully analyzing these technical factors, organizations can make informed decisions that align with their strategic objectives and enhance their overall operational efficiency in the cloud environment.\nSecurity Data security is of critical importance in cloud computing. While reputable cloud providers invest substantially in security measures, organizations retain responsibility for correctly configuring these security features and implementing appropriate access controls. The consequences of inadequate security measures can be severe, as exemplified by the 2019 Capital One incident. Due to a misconfiguration of Capital One\u0026rsquo;s cloud resources, an attacker was able to access the sensitive account information of the bank's clients (Stella, 2019). This misconfiguration ultimately cost Capital One a $190 million settlement with its clients (Avery, 2022). With the global average cost of a data breach in 2024 being $4.88 million, organizational cybersecurity is paramount (IBM, 2024).\nCase Study: Code Spaces The 2014 incident involving Code Spaces serves as a cautionary tale in the realm of cloud service security and disaster recovery. Following a distributed denial-of-service (DDoS) attack, the company\u0026rsquo;s AWS control panel was compromised, enabling an attacker to delete customer data and backup files (Ragan, 2014). Despite Code Spaces' attempts to communicate with clients and restore services, the irreversible loss of critical data and the failure to implement adequate security protocols ultimately led to the company's bankruptcy and closure. This case underscores the necessity for robust security measures and effective disaster recovery plans in cloud computing environments, illustrating the profound impact that security breaches can have on business viability.\nCompliance Regulatory compliance is a critical factor, particularly for organizations in highly regulated industries. Regulations such as the General Data Protection Regulation (GDPR) for EU data protection, the Health Insurance Portability and Accountability Act (HIPAA) for US healthcare information, and the Payment Card Industry Data Security Standard (PCI DSS) must be carefully considered when planning cloud adoption. Some cloud providers offer specialized compliance-focused services, but ultimately, the responsibility for compliance rests with the organization.\nData Resilience and Disaster Recovery Cloud computing has the potential to significantly enhance an organization\u0026rsquo;s data resilience and disaster recovery capabilities. However, it is crucial to understand the specific offerings of cloud providers and implement appropriate strategies.\nThe 3-2-1 Backup Strategy A robust backup strategy, often referred to as the 3-2-1 rule, involves maintaining: 3 copies of data, on 2 different types of storage media, with 1 copy off-site (Rabinov, 2021). Cloud storage can be an excellent fit for this strategy, but organizations must ensure they understand the resilience and geographic distribution of their cloud-stored data.
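As a toy illustration of the rule, a backup inventory can be checked against its three conditions programmatically. The inventory entries below (locations and media types) are hypothetical examples, not a recommended configuration.

# Toy check of a hypothetical backup inventory against the 3-2-1 rule.
backups = [
    {"location": "on-site NAS",  "media": "disk",           "offsite": False},
    {"location": "tape vault",   "media": "tape",           "offsite": False},
    {"location": "cloud bucket", "media": "object storage", "offsite": True},
]

copies_ok  = len(backups) >= 3                        # 3 copies of the data
media_ok   = len({b["media"] for b in backups}) >= 2  # on 2 different media types
offsite_ok = any(b["offsite"] for b in backups)       # 1 copy off-site

print("3-2-1 rule satisfied:", copies_ok and media_ok and offsite_ok)

Cloud object storage commonly serves as the off-site copy, which is why understanding its geographic distribution, as noted above, matters for the rule to hold in practice.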
Case Study: OVHcloud Data Center Incident The importance of geographic data distribution is exemplified by the 2021 fire at OVHcloud's Strasbourg data center. In this incident, an entire data center burned down, resulting in permanent data loss for numerous customers who relied on a single data center for storage and lacked proper backups (Humphries, 2021). This event serves as a stark reminder of the necessity for multi-region data replication and comprehensive disaster recovery planning.\nDecision-making Framework for Cloud Adoption When considering cloud adoption, organizations should evaluate the following factors:\nCurrent IT Infrastructure Assessment: Evaluate the age, efficiency, and scalability of existing infrastructure.\nWorkload Characteristics Analysis: Determine if workloads are suitable for cloud migration (e.g., variable compute needs, requirement for global access).\nData Sensitivity and Compliance Requirements: Consider regulatory constraints and data protection needs.\nComprehensive Cost Analysis: Perform a detailed Total Cost of Ownership (TCO) analysis comparing on-premises versus cloud scenarios.\nSkill Gap Assessment: Evaluate the current team\u0026rsquo;s cloud capabilities and the cost of acquiring necessary skills.\nBusiness Continuity and Disaster Recovery Requirements: Assess how cloud services can enhance or complicate existing strategies.\nLong-term Business Strategy Alignment: Ensure cloud adoption aligns with broader organizational goals and future plans.\nConclusion Cloud computing presents both opportunities and challenges for organizations. While it offers the potential for enhanced scalability, cost-efficiency, and innovation, it also introduces new complexities in terms of management, security, and strategic planning. The decision to adopt cloud computing should not be taken lightly or driven merely by industry trends. Instead, it should be the result of careful consideration of an organization\u0026rsquo;s specific needs, capabilities, and long-term objectives.\nBy thoroughly evaluating the factors discussed in this paper, organizations can make informed decisions about the appropriateness, timing, and implementation strategy of cloud computing to drive their business forward. As cloud technologies continue to evolve, staying informed about emerging trends and continuously reassessing the organization's cloud strategy will be crucial for maintaining a competitive edge in an increasingly digital business landscape.\nUltimately, the impact of cloud computing on firm performance is nuanced and depends on various organizational factors. Therefore, a thoughtful, tailored approach to cloud adoption is essential for realizing its potential benefits and mitigating associated risks. Organizations that successfully navigate these considerations will be well-positioned to leverage cloud computing as a strategic asset in their ongoing digital transformation efforts.\nReferences \u0026ldquo;Amazon S3 Storage Classes.\u0026rdquo; Object Storage Classes \u0026ndash; Amazon S3, https://aws.amazon.com/s3/storage-classes/. Accessed 18 Oct. 2024.\nArmbrust, Michael, et al. Above the Clouds: A Berkeley View of Cloud Computing, 10 Feb. 2009, https://www2.eecs.berkeley.edu/Pubs/TechRpts/2009/EECS-2009-28.pdf.\nAvery, Dan. \u0026ldquo;Capital One $190 Million Data Breach Settlement: Today Is the Last Day to Claim Money.\u0026rdquo; CNET, 30 Sept. 2022, https://www.cnet.com/personal-finance/capital-one-190-million-data-breach-settlement-today-is-deadline-to-file-claim/.\nChaudhary, Ashwin. \u0026ldquo;The Booming Demand for Cybersecurity \u0026amp; Cloud Professionals.\u0026rdquo; CSA, 3 Oct. 2023, https://cloudsecurityalliance.org/blog/2023/10/03/the-booming-demand-for-cybersecurity-cloud-professionals.\n\u0026ldquo;Cost of a Data Breach Report 2024.\u0026rdquo; IBM, https://www.ibm.com/reports/data-breach. Accessed 17 Oct. 2024.\nDignan, Larry. \u0026ldquo;Top Cloud Providers: AWS, Microsoft Azure, and Google Cloud, Hybrid, SaaS Players.\u0026rdquo; ZDNET, 22 Dec. 2021, https://www.zdnet.com/article/the-top-cloud-providers-of-2021-aws-microsoft-azure-google-cloud-hybrid-saas/.\n\u0026ldquo;Gartner Forecasts Worldwide Public Cloud End-User Spending to Surpass $675 Billion in 2024.\u0026rdquo; Gartner, 20 May 2024, https://www.gartner.com/en/newsroom/press-releases/2024-05-20-gartner-forecasts-worldwide-public-cloud-end-user-spending-to-surpass-675-billion-in-2024.\nHumphries, Matthew. \u0026ldquo;OVHcloud Data Center Devastated by Fire, Entire Building Destroyed.\u0026rdquo; PCMag, 10 Mar. 2021, https://www.pcmag.com/news/ovhcloud-data-center-devastated-by-fire-entire-building-destroyed.\nKaplan, James, et al. \u0026ldquo;Protecting Information in the Cloud.\u0026rdquo; McKinsey \u0026amp; Company, 1 Jan. 2013, https://www.mckinsey.com/capabilities/mckinsey-digital/our-insights/protecting-information-in-the-cloud.\nKroenke, David M., and Randall Boyle. Using MIS. 12th ed., Pearson, 2022.\nLange, Kayly. \u0026ldquo;Cloud Strategies: How To Build a Cloud Strategy for Success.\u0026rdquo; Splunk, 22 Apr. 2024, https://www.splunk.com/en_us/blog/learn/cloud-strategy.html.\nPereira, Steve. \u0026ldquo;How GE Burned $7B on Their Platform (and How to Avoid Doing the Same).\u0026rdquo; Platform Engineering, https://platformengineering.org/blog/how-general-electric-burned-7-billion-on-their-platform. Accessed 19 Oct. 2024.\nRabinov, Natasha. \u0026ldquo;What\u0026rsquo;s the Diff: 3-2-1 vs. 3-2-1-1-0 vs. 4-3-2.\u0026rdquo; Backblaze Blog | Cloud Storage \u0026amp; Cloud Backup, 21 July 2021, https://www.backblaze.com/blog/whats-the-diff-3-2-1-vs-3-2-1-1-0-vs-4-3-2/.\nRagan, Steve. \u0026ldquo;Code Spaces Forced to Close Its Doors after Security Incident.\u0026rdquo; CSO Online, 18 June 2014, https://www.csoonline.com/article/547518/disaster-recovery-code-spaces-forced-to-close-its-doors-after-security-incident.html.\nStella, Josh. \u0026ldquo;A Technical Analysis of the Capital One Cloud Misconfiguration Breach.\u0026rdquo; CSA, 9 Aug.
2019, https://cloudsecurityalliance.org/blog/2019/08/09/a-technical-analysis-of-the-capital-one-cloud-misconfiguration-breach.\nTraditionally, computation was done through the provisioning of \u0026lsquo;bare metal\u0026rsquo; servers, where the operating system and software run directly on the hardware without the use of virtualization.\nEdge-to-Cloud computing allows data processing to occur closer to where data is generated (at the \u0026quot;edge\u0026quot;) while leveraging the cloud for storage, analytics, and management.\nA digital-first organization is one that prioritizes digital technology in all aspects of its operations and customer interactions, embedding digital solutions into the core of the business strategy.\n","permalink":"https://zags.dev/papers/strategic-cloud-adoption/","summary":"This paper explores the strategic considerations and technical factors organizations must evaluate when adopting cloud computing, providing a framework for informed decision-making.","title":"Strategic Cloud Adoption: A Guide for Organizational Implementation"},{"content":"I\u0026rsquo;ve been wanting to document some of the projects I\u0026rsquo;m working on, and a blog seems like a great way to do that. I also often run into problems, and a blog may help others solve similar issues (definitely not foreshadowing for later on in this post \u0026#x1f61c;).\nBut first - I need a blog!\nA Blog You Say? I\u0026rsquo;ve wanted to try out a static site generator since I first heard about them a few years ago (yes - I\u0026rsquo;m ashamed to say that this has been on my to-do list for that long \u0026#x1f62c;). Hugo is always highly recommended, and with the recent WordPress debacle, it was at the top of everyone\u0026rsquo;s recommendation list as an alternative. So - that\u0026rsquo;s what I went with.\nGetting started was easy enough, following the quick start guide on the Hugo website, https://gohugo.io/getting-started/quick-start/.\nInstalling Hugo I headed over to https://gohugo.io/installation/linux/. Lucky me - there was a native package available for Fedora (my current favorite distribution for development work - I may write a future blog post on why I\u0026rsquo;ve settled on Fedora for dev work, so stay tuned for that).\nUnfortunately, the version packaged in Fedora is too old. As another aside, I\u0026rsquo;ve been meaning to get into maintaining some packages, so maybe I could start by updating this one (another item on the to-do list \u0026#x1f605;)!\nSo - let\u0026rsquo;s grab the latest binary from Hugo\u0026rsquo;s GitHub release page https://github.com/gohugoio/hugo/releases/latest. I downloaded the latest version, extracted the tarball, moved the binary into my $PATH, and everything worked - well, for the most part.\nStreamlining the Binary Download Since I figured I would be working with Hugo a lot moving forward, I wanted to add it to my toolbx Containerfile (another future post inbound on container-based development workflow).
Unfortunately, due to how Hugo has their releases set up, there isn\u0026rsquo;t an easy way to grab just the binary from the release page.\nProblem #1: Hugo adds the version of the release to the filename.\nThis prevents grabbing the latest release using a constant path, such as https://github.com/gohugoio/hugo/releases/latest/download/hugo_extended-linux-amd64.tar.gz, since the version in hugo_extended_${version}_linux-amd64.tar.gz has to be included.\nAnnoying.\nSo - now, to create a little workaround to get the $version of the latest release.\nYou Named Them What?? A quick search to avoid duplicating work is always a smart move; and what do you know - just what I needed, a page of one-liners to grab the version of the latest release: One Liner to Download the Latest Release from Github Repo - Gist. The cleanest solution seemed to be using the GitHub API and good ol\u0026rsquo; jq.\ncurl -s https://api.github.com/repos/gohugoio/hugo/releases/latest | jq -r \u0026#39;.tag_name\u0026#39;\nProblem #2: The GitHub tag_name differs from the filename version tag.\nWhile this gave me the necessary version, it came prefixed with a v - which isn\u0026rsquo;t present in the version in the Hugo release filename.\nArgh!\nNothing a little coreutils CLI magic using cut won\u0026rsquo;t solve. Let\u0026rsquo;s grab everything after the first character.\ncurl -s https://api.github.com/repos/gohugoio/hugo/releases/latest | jq -r \u0026#39;.tag_name\u0026#39; | cut -c 2-\nGreat - now we have the version to add to the filename path of the latest release. Nothing like a little detour to keep things interesting \u0026#x1f612;.\nNow we are finally able to grab the archive of the latest release.\nPacked with Potential\u0026hellip; Not Instead of just providing a binary that we could save directly to our path with something like:\nwget -qO ~/.local/bin/hugo https://github.com/gohugoio/hugo/releases/latest/download/hugo_extended_${version:?}_linux-amd64.tar.gz\n\u0026#x1f50d; Wait a second - wget? Weren\u0026rsquo;t you just using curl?\nYes! I often default to wget because of its inclusion in busybox, but curl gets the job done too. Feel free to use whichever you prefer.\nHugo decided to create a compressed archive including a README and LICENSE along with the binary. I guess they want to make us work for it.\nProblem #3: Including unnecessary files along with the binary.\nYay \u0026#x1f615;.\nOk. Well this is nothing I haven\u0026rsquo;t dealt with before. Some more CLI magic and we should be on our way.\nwget -qO- https://github.com/gohugoio/hugo/releases/latest/download/hugo_extended_${version:?}_linux-amd64.tar.gz | tar xz hugo\nThis pipes the stdout from wget into tar, decompressing with z, extracting with x, and pulling only the file named hugo from the archive.\nGreat - we FINALLY have a binary \u0026#x1f624;.\nGrabbing the Hugo Binary Now we can put it all together. We can get the latest version of the hugo binary and extract it to ~/.local/bin/.\nhugo_latest_version=$(wget -qO- https://api.github.com/repos/gohugoio/hugo/releases/latest | jq -r \u0026#39;.tag_name\u0026#39; | cut -c 2-)\nwget -qO- https://github.com/gohugoio/hugo/releases/latest/download/hugo_extended_${hugo_latest_version}_linux-amd64.tar.gz | tar xzv -C ~/.local/bin/ hugo\n\u0026#x1f5d2;\u0026#xfe0f; Note\nIf using the above snippet to build a container be sure to extract to /bin/, as ~/.local/bin/ won\u0026rsquo;t be available.\nPhew!
That was a lot of work to get a binary \u0026#x1f613;.\nBack on Track Now that all that executable nonsense is sorted out - I can get back to actually setting up this blog. I followed the rest of the quick start guide, then got to the part on picking a theme.\nHmm. Let\u0026rsquo;s see what other themes are available at https://themes.gohugo.io/.\nUh-oh. That\u0026rsquo;s a lot of options.\nRather than pursuing the perfect theme, I decided to just get started with the first decent option I found.\nThe theme will likely evolve over time, and I may even try to write my own to get some webdev/front-end experience.\nBut hey look at that - I have a blog now!\nLessons Learned Problem Recap Hugo added the version of the release to the filename. The GitHub tag_name differs from the filename version tag. Unnecessary files were included in an archive with the binary. Lesson When publishing releases:\nDon\u0026rsquo;t use unique filenames for each release. While this can be useful for users, it complicates automation efforts. If the filename is going to include an identifier, use an identifier that is easily machine-obtainable. Don\u0026rsquo;t bundle unnecessary files with the release. What Now? Well - I wrote this post, that\u0026rsquo;s a start. I plan to write about my current homelab setup next, followed by documenting its upcoming migration.\nThat\u0026rsquo;s all I have for now, but I\u0026rsquo;m excited to share more posts in the future - so stay tuned, and I hope you enjoyed my first post!\n","permalink":"https://zags.dev/posts/a-wild-blog-appeared/","summary":"A technical blog has been on my to-do list for a while now.\nWith a home infrastructure rebuild on the way\nit seems like a good time to kick one off.","title":"A Wild Blog Appeared!"},{"content":"This report contains guidance regarding the choice of licensing models on which to base an information technology infrastructure. Both open source and closed source technology come with benefits and drawbacks; this report highlights some of the differences between the two options.\nDuring the planning phase of infrastructure, choosing an adequate foundation is essential, as it supports everything built on top of it. Open versus closed source technology is one such decision that heavily impacts how the infrastructure foundation is built. An incorrect decision at the beginning of the building process often becomes extremely expensive to fix, and sometimes infeasible without another complete rebuild.\nThis report concludes that while businesses of all sizes can benefit from open source, there are some caveats to consider during the infrastructure planning phase. The primary factor to be considered is the size of an organization\u0026rsquo;s information technology staff, as some open source projects may not offer support contracts. Support, while not unique to closed source, is often more available due to the proprietary nature of closed source technologies.\nExecutive Summary Purpose of Report This report analyzes open versus closed source licensing models and their effects on businesses. With modern business dependence on information technology, planning an information technology infrastructure that can meet the demands of a business is as important as ever. One major consideration when choosing technology to use in an information technology infrastructure is whether a technology is open or closed source.
This is often a high-stakes decision for a business, as it determines how well infrastructure can scale with the business and what kind of costs a business can expect over time.\nResearch Methods The research in this report consists of both primary observational research and secondary research. Primary research is based on personal observations of the market and work experience dealing with the issues discussed. The secondary research in this report uses information found on the internet. Examples of the secondary research used include news articles of historical events, academic whitepapers, corporate blogs, and articles analyzing specific issues.\nRecommendations The recommendation resulting from this analysis is to use open source software wherever possible in your organization. The benefits of open source make it extremely difficult for closed source technology to compete. Open source often satisfies all business requirements, while reducing costs and increasing choice and flexibility when it comes to information technology infrastructure.\nOpen versus Closed Source Information Technology Infrastructure Introduction Modern business operations are highly dependent on information technology, with many businesses being unable to function without email or web services. Small to medium-sized businesses are often not concerned with the specifics of the technologies that they are reliant on. As long as a particular piece of information technology is able to fulfill the requirements, the only other consideration is generally cost. Many businesses make the mistake of only considering the immediate costs of technology and fail to consider how information technology costs can compound due to faulty planning and lack of foresight.\nThe failure to consider the full scope of information technology choices can become a very costly mistake as businesses grow. Technology often becomes a deeply embedded part of business infrastructure and grows with the business. A small mistake made during the early phases of a business\u0026rsquo;s life cycle can quickly become extremely costly as a company attempts to scale its infrastructure to meet ever-changing needs. A critical part of information technology infrastructure planning that often goes unconsidered is whether the technology being used is open or closed source.\nA fundamental requirement of open source technology is that the underlying specifics of the technology are accessible to the public; hence, the source of the technology is ‘open’. Almost all technology is dependent on open source technology at some point. For example, without open source standards, many technologies like the internet would be unable to function. Open source is important for technology as it allows for wide adoption and interoperability.\nConversely, closed source technology is generally considered to be anything that is proprietary, which often requires a fee to use or implement. While closed source lacks an industry-standard definition, Kaspersky Lab (n.d.) defines closed source software as follows:\nLack of access to source code is a common, but not obligatory, feature of proprietary software. The code may be partially or wholly accessible in some cases, but its use without the author’s consent is unlawful.
The owner of proprietary software can: Make the source code available to everyone but place legal restrictions on its modification and use; Make the source code available to a limited group of individuals: auditors, government officers, key customers, etc.; Permit the use of a program’s source code under a certain agreement, free of charge or for a fee. Software is proprietary by default under the laws of most countries. When creating a program, the author automatically receives all rights to its distribution, modification, and use, whereas waiving such rights requires documentation.\nWhile open and closed source are typically used in reference to software, as in the Kaspersky Lab (n.d.) definition, the terms can more broadly refer to hardware, software, implementations, and standards. Closed source hardware could be a computer part, such as a CPU or GPU, that cannot be built without licensing proprietary technology. An example of a closed source standard is H.264, a method of compressing video that is used almost everywhere, which has license terms that state, “to use and distribute H.264, browser and OS vendors, hardware manufacturers, and publishers who charge for content must pay significant royalties—with no guarantee the fees won’t increase in the future” (Google, 2011).\nUnderstanding how a business uses various open and closed source technologies is an essential step in designing a successful information technology infrastructure. Not having knowledge of the licensing models in use can open a business up to unexpected fees, lawsuits, and cancellation of services that it is reliant upon. This can lead to business interruption, and in extreme cases, bankruptcy.\nOpen Source Open source is informally used to refer to the availability of source material; there is, however, a formal, industry-accepted definition set by the Open Source Initiative. The Open Source Definition, set by the Open Source Initiative (2023a), states that open source technology must meet the following criteria: allow free redistribution, have available source code, allow for derived works, ensure integrity of the author’s source code, no discrimination against persons or groups, no discrimination against fields of endeavor, distribution of license, license must not be specific to a product, license must not restrict other software, and that the license must be technology-neutral.\nTechnology that meets the definition of open source provides unique benefits to its users. Free redistribution means technologies can remain free and available, removing concerns about unexpected fees or access being limited. Having the source code available allows for inspection, modification, and adaptation of the technology for the user\u0026rsquo;s specific wants and needs. Derived works allow for technologies to be \u0026ldquo;forked\u0026rdquo;, the process of creating a copy of a project and turning it into a new, separate derived project; this enables the development of a new version of the project based on the original technology if there is disagreement with a decision made in the original work. Integrity of the author\u0026rsquo;s source code ensures that different changes and forks can be identified as \u0026ldquo;official\u0026rdquo; and \u0026ldquo;unofficial\u0026rdquo; modifications. No discrimination against persons or groups and no discrimination against fields of endeavor prevent exclusion from usage based on a creator\u0026rsquo;s preferences, ensuring user diversity and removing restrictions on what kind of businesses can use the project.
Distribution of license ensures that a company cannot add restrictions to the original license, such as requiring a non-disclosure agreement to use the technology, if they were not included in the original license. A license that is not specific to a product prevents restricting usage based on the technology being part of a larger product, so everything can be used independently of other technologies and licenses. A license that does not restrict other software ensures user choice and flexibility by preventing a license from excluding the usage of other technologies. Licenses that are technology-neutral ensure open source will not interfere with other technologies in use.\nWhy Choose Open Source? The only way to ensure true freedom of choice within a business\u0026rsquo;s information technology infrastructure is through the use of open source. Your business\u0026rsquo;s infrastructure should be your infrastructure, not restricted by the whims of a company who may not have your best interests in mind. It is not possible to control and manage infrastructure while relying on the closed source decisions of a third party. The freedom of choice provided by open source allows technology to be adapted to the needs of your specific infrastructure. The nature of open source technology also means that you can effect change. This can be achieved by directly modifying an open source project to your needs or helping steer the direction of a project by participating in open discussion. Another common method of effecting change is financially supporting projects through donations and sponsorships, which can encourage development and sustain the technology. Other reasons to choose an open source infrastructure are its many functional benefits: security, affordability, transparency, perpetuity, interoperability, and flexibility.\nSecurity Having the source code available results in improved security, even though closed source companies claim otherwise (Clarke et al., n.d.; Wheeler, 2015). A portion of this may have to do with the misaligned incentive of closed source companies to not report security issues, as no one can verify or check for vulnerabilities. This means the company has little incentive to make these issues public or fix them in a timely manner (Wheeler, 2015). The recent exploitation of SolarWinds\u0026rsquo; closed source software Orion was called the largest cyberattack in history by Microsoft President Brad Smith (Cerulus, 2021). The notorious SolarWinds hack affected approximately 18,000 businesses, including about a dozen government agencies, who stated the hack posed a grave risk to national security (Temple-Raston, 2021; Vaughan-Nichols, 2020).\nOpen source also means that technology can be publicly scrutinized by security researchers. This allows users of a technology to see security researchers\u0026rsquo; safety evaluations and any potential flaws found within a technology. An example of this is cryptography, where universities and government agencies sponsor the research of the latest encryption technology through competitions. Because of this public research, security researchers identified a backdoor placed into an encryption standard that allowed the government to effectively spy on anyone using the encryption method (Masnick, 2013). Security research is also incentivized through the use of ‘bug bounties’, a practice in which a company offers a reward to researchers who are able to identify security issues.
This also means you can pay to have a technology put through a security audit, though larger projects will often use community funding to provide a public security audit (Wheeler, 2015).\nAffordability With software costs increasing 62% on average during the period between 2009 and 2019, free open source technology can become extremely appealing to businesses of all sizes (Guay, 2019). Even with the additional costs associated with the purchase of optional support contracts, open source is almost always more affordable. Red Hat (now owned by IBM), the company behind Red Hat Enterprise Linux, an open source operating system with first party commercial support, showed that when compared to Microsoft Windows Server, a closed source operating system, using Red Hat Enterprise Linux can reduce server infrastructure costs by 29% and information technology staffing costs by 41% (Red Hat Enterprise Linux Team, 2013). Another example is a comparison of two popular database options: the open source PostgreSQL is free, while the closed source Oracle Database starts at $104,310 (Anderson, 2020). With cost being a primary consideration when choosing technologies, an infrastructure built on open source offers maximal savings.\nTransparency While some closed source technology may have source code available, project contributions and communications by the individuals working on the project are often redacted or omitted. Conversely, all open source code, contributions, and communications are publicly available. This gives a huge advantage to open source, as project direction and progress can be tracked, as opposed to relying on press releases and company communications. Additionally, there is often much more visibility into why certain decisions were made, which can aid in decision making when choosing infrastructure technologies.\nPerpetuity Due to the license model of open source technology, there is never a concern about sudden cost increases or license changes. A recent example is how the video game development tool company Unity made a sudden and unexpected change to their fee structure. Unity decided to add a retroactive fee based on downloads of products developed using their tool. This fee model was so poorly thought out that it was quickly met with a flurry of responses from users of the product, who pointed out that the new model could bankrupt some businesses (Parrish, 2023). This is not uncommon, and while Unity backpedaled some of the changes after public outrage, it serves as an example of how closed source technology can change fees at will, while open source is free and stays free. This reduces business risk, and can provide peace of mind when integrating technologies into business infrastructure.\nInteroperability Open source technology will most often utilize other open source standards, which improves interoperability with other technology. Closed source technology, however, will often create proprietary protocols and standards that will only interoperate with other technologies that pay to implement the proprietary technology. The license model of open source allows for projects to be modified and adapted; this enables custom integrations with other products that may be in use in your infrastructure, further improving interoperability.
Closed source will usually prohibit such modification, preventing any possibility of adapting other products or making necessary changes to allow interoperability; this is usually intended to prevent migration away from the closed source product. Being able to easily interoperate with other technologies ensures the infrastructure remains flexible and able to adapt to changes.\nFlexibility All of the benefits of open source technologies combine to give maximal flexibility. Having the freedom of choice when it comes to infrastructure means easy adaptability and scaling. Without the constraints of closed source technology, infrastructure components become much more like interchangeable pieces as opposed to rigid, immovable blocks. This enables changes to be made much more effectively and efficiently, saving both time and money. Open source licenses will often lack any restrictions on deployment sizes as well, giving the flexibility to scale far beyond the contractual limits of closed source (The Linux Foundation, 2018). Having the ability to be flexible when it comes to infrastructure allows for more choice. This can help prevent your business from being put in difficult situations with no options.\nAdvocates of Open Source Some of the largest companies in the world have an active open source culture, such as Amazon, Google, IBM, and Meta. The world\u0026rsquo;s fifth-largest company, Amazon, with a 2023 market value of $1.3 trillion, has been an advocate of open source since 2006, with over 1,200 Amazon open source projects, while contributing to and supporting many more (Amazon (AMZN) - Market Capitalization, n.d.; Amazon, n.d.). Google, the world\u0026rsquo;s fourth-largest company, valued at $1.65 trillion, is responsible for open source projects we are all familiar with, such as: Android, the mobile operating system; Chromium, the browser upon which Google Chrome and many other web browsers are built; Go, a programming language used by many web applications; and Kubernetes, which is in common use across industry information technology infrastructure (Alphabet (Google) (GOOG) - Market Capitalization, n.d.; Google, n.d.). Meta, previously known as Facebook, currently has over 700 open source projects, with over 220,000 forks and 1.1 million project followers (Meta, n.d.). IBM, another well-known name in business, has been supporting open source for over twenty-five years, with 7,400 employees working on open source projects, and over 2,900 open source projects available (IBM, n.d.).\nOpen Source Considerations Lack of Support While open source provides many desirable qualities, it does have some shortcomings. Notably, smaller projects will often lack commercial support. While community support is often available, this may be unsuitable for businesses requiring reliable and timely support. Usage of open source may require a highly trained internal information technology staff to overcome a lack of support options. The need for highly trained information technology professionals can sometimes be cost-prohibitive depending on the size of the business. While there are companies specializing in information technology contracting, this still may not be a viable option for smaller businesses.\nPotential User Interface Problems Another possible undesirable quality of open source is what some may consider unpolished user interfaces. This is usually caused by development focusing on function rather than appearance.
A common flaw in the closed source model is that companies create a polished interface to attract customers and generate revenue, but the product often lacks function, as more development effort goes toward appearance than substance. The removal of this profit motive often leads to a focus on function first and appearance as a secondary feature in open source technologies.\nAbandonware Risk One of the largest fears amongst open source consumers is a developer abandoning a project. Since work is generally entirely voluntary or donation-based, developers may often have to prioritize other work to make money and put community open source projects second. This can lead to slow development times, and worse yet, total abandonment. While large open source projects often have corporate backing and support, many smaller projects do not have this luxury. While derived works will allow the project to be maintained by someone else, there is sometimes no one willing to continue development.\nClosed Source Because of the proprietary nature of closed source, most companies opt to release their technology as closed source. Closed source provides the easiest path to charging customers for usage and making a profit. The lack of visibility also limits competition by preventing competitors from viewing how certain processes are implemented within a technology, which is desirable for most companies selling a technology. Closed source also often enables vendor lock-in, which is profitable for businesses looking to retain customers, as consumers cannot easily change the product they use.\nWhy Choose Closed Source? Niche Products Since most companies will release their technology as closed source, there may not be an open source alternative to meet requirements. While the largest enterprises can afford to develop their own open source alternative, this is usually not financially feasible for small to medium-sized businesses. This leaves no choice for many businesses, and closed source may be the only option. Even if there is an open source alternative available, it may not always be the best choice. For specialized tasks with specific requirements, a closed source product may best satisfy those needs. Understanding infrastructure requirements can help determine whether a closed source solution is required.\nCommercial Support While many open source projects may not directly provide support contracts, open source allows third parties to compete in providing support to satisfy market demand. This is not always the case, though, and support may not be available. If your business lacks the internal information technology staff to provide its own support, or prefers to contract these services out, choosing a closed source technology may better satisfy this requirement. Another reason to choose closed source technology for support is that first party vendors often have the highest level of expertise when it comes to their own product. Most small businesses are likely going to fall into this category, as most businesses will not develop an information technology staff until later stages of growth.\nAddressing the Drawbacks of Closed Source Accounting for Future Migration When choosing closed source, understanding how a future migration may be handled is essential. Because many closed source products result in vendor lock-in, your business may end up committed to using a particular product for the life of the business.
This can also lead to being forced into supporting technologies long after they should be retired. Legacy system support can pose significant risk for businesses. Part of this risk is becoming burdened with significant expenses, as outdated technologies often require highly specialized technicians to maintain them. Another risk associated with legacy systems is cybersecurity hazards, where systems that are no longer supported will not receive security updates, leaving your business exposed to security vulnerabilities. Understanding how, and if, the business will be able to retire and migrate away from closed source technology in the information technology infrastructure will help with planning, and may provide a reason to choose one product over another.\nUnderstanding the Business Model Having an awareness of how a closed source technology plans to make money can help prepare for future changes. If a product’s business strategy is to act as a loss leader to capture the market, expect some form of pricing model change or platform decay as the company attempts to recover value. Vendor lock-in and aggressive license and support costs are common business strategies amongst closed source companies. Vendor lock-in prevents consumers from reasonably changing products, often trapping them into using the product as fees increase. Aggressive license and support costs are often enabled by vendor lock-in. When a consumer is unable to switch to a competitor, there is no incentive to have competitive prices. Acquisitions are another strategy used to reduce competition and grow the reliance on a single platform. By acquiring a company or competitor, a vendor can make changes that lock more consumers into its platform.\nAs Platforms Decay, Let’s Put Users First (Doctorow, 2023) highlights the many strategies companies use to ‘trap’ customers on platforms. The common closed source lifecycle starts with luring consumers with new, often free, features to build the platform. Once customers are using the product, companies use strategies like limiting interoperability and preventing an easy exit from the platform, trapping customers (Doctorow, 2023). Once customers are trapped, the company can harvest profits by limiting investment in improvement and reducing spending on maintenance of the product. Because this process of attract, trap, harvest, and decay is an effective profit strategy, it is used by many companies. Understanding these models can help with implementing a response plan and planning for changes, improving infrastructure resilience during times of predictable change.\nLitigation Avoidance Many closed source companies are known to engage in highly aggressive auditing of usage to ensure licensing compliance. Having an understanding of the exact terms of your licensing agreements can help prevent expensive litigation as a company tries to charge extra fees and back payments for contractual violations. Employing licensing experts to audit your infrastructure and review contractual agreements can save on unexpected penalties.\nConclusion The many benefits of open source make it difficult to compete with, though there are a few situations where closed source technology may be a better choice for an information technology infrastructure. There are also degrees of practicality for implementing open source technology. Software is an easy starting point, as open source software will often better integrate with other open source technology, such as open source hardware and standards.
This is also the most common piece of technology that businesses interact with. Implementing technology that maximizes choice and flexibility early in a business\u0026rsquo;s information technology infrastructure journey will help minimize future problems.\nIf an open source hardware alternative is available and meets requirements, it should be considered by businesses of all sizes. As a general rule, however, prioritizing open source hardware is an inefficient use of time for small to medium-sized businesses, because of the budget and scale necessary to afford and justify current open source hardware, which is often specialty equipment. Typically, only the largest businesses have the scale and budget to justify custom open source hardware, because of the specialty and expense associated with such technology.\nIn cases where closed source technology makes more sense, such as the need for a niche product or a specific support requirement, there are strategies that can minimize risk to an organization, including creating migration plans and understanding how the technologies in use plan to make money. Migration plans create a method of moving to an alternative product if the need arises, minimizing business disruption. Understanding the business models of technologies in use can help predict the changes in costs that businesses are often unprepared for.\nIt is often difficult to go wrong with an open source information technology infrastructure. Some of the oldest technologies in use are open source, reducing the risk of becoming outdated. Many of the largest businesses in the world utilize open source technology because of the flexibility and choice it provides. These benefits can be utilized by businesses of all sizes, and open source technology can scale as a business grows. This makes open source information technology a superior choice when compared to closed source.\nReferences Alphabet (Google) (GOOG) - Market capitalization. (n.d.). Retrieved September 27, 2023, from https://companiesmarketcap.com/alphabet-google/marketcap/\nAmazon. (n.d.). Open source – Amazon Web Services. Amazon Web Services, Inc. Retrieved September 27, 2023, from https://aws.amazon.com/opensource/\nAmazon (AMZN) - Market capitalization. (n.d.). Retrieved September 27, 2023, from https://companiesmarketcap.com/amazon/marketcap/\nAnderson, K. (2020, July 22). PostgreSQL vs Oracle: Difference in costs, ease of use, and functionality. DZone. https://dzone.com/articles/postgresql-vs-oracle-difference-in-costs-ease-of-u\nCerulus, L. (2021, February 15). SolarWinds is ‘Largest’ cyberattack ever, Microsoft president says. POLITICO. https://www.politico.eu/article/solarwinds-largest-cyberattack-ever-microsoft-president-brad-smith/\nClarke, R., Dorwin, D., \u0026amp; Nash, R. (n.d.). Is open source software more secure?: Homeland Security / Cyber Security. University of Washington. Retrieved September 27, 2023, from https://courses.cs.washington.edu/courses/csep590/05au/whitepaper_turnin/oss(10).pdf\nDoctorow, C. (2023, June 27). As platforms decay, let’s put users first. Electronic Frontier Foundation. https://www.eff.org/deeplinks/2023/04/platforms-decay-lets-put-users-first\nGoogle. (n.d.). Google open source. Google Open Source. Retrieved September 27, 2023, from https://opensource.google/\nGoogle. (2011, January 14). More about the Chrome HTML video codec change. Chromium Blog.
https://blog.chromium.org/2011/01/more-about-chrome-html-video-codec.html\nGuay, M., [@maguay]. (2019, September 3). It’s not just you: Software has gotten far more expensive: The software inflation rate from 2009 to 2019. Capiche. https://capiche.com/e/software-inflation-rate\nIBM. (n.d.). Open source at IBM. IBM Developer. Retrieved September 27, 2023, from https://www.ibm.com/opensource/\nKaspersky Lab. (n.d.). Closed-source software (proprietary software). Encyclopedia by Kaspersky. Retrieved September 27, 2023, from https://encyclopedia.kaspersky.com/glossary/closed-source/\nLinode. (2023, March 9). Open source vs. Closed source: What’s the difference? Linode Guides \u0026amp; Tutorials. https://www.linode.com/docs/guides/open-source-vs-closed-source/\nMasnick, M. (2013, December 23). RSA’s “Denial” concerning $10 million from the NSA to promote broken crypto not really a denial at all. Techdirt. https://www.techdirt.com/2013/12/23/rsas-denial-concerning-10-million-nsa-to-promote-broken-crypto-not-really-denial-all/\nMeta. (n.d.). About | Meta open source. Retrieved September 27, 2023, from https://opensource.fb.com/about\nOpen Source Initiative. (2023a, February 22). The open source definition. https://opensource.org/osd/\nOpen Source Initiative. (2023b, April 14). The open source definition (annotated). https://opensource.org/definition-annotated/\nParrish, A. (2023, September 12). Unity has changed its pricing model, and game developers are pissed off. The Verge. https://www.theverge.com/2023/9/12/23870547/unit-price-change-game-development\nRed Hat Enterprise Linux Team. (2013, October 7). How Red Hat Enterprise Linux trims total cost of ownership (TCO) in comparison to Windows Server. Red Hat Blog. https://www.redhat.com/en/blog/how-red-hat-enterprise-linux-trims-total-cost-of-ownership-in-comparison-to-windows-server\nSatyabrata, J., [@Satyabrata_Jena]. (2023, March 24). Difference between open source software and closed source software. GeeksforGeeks. https://www.geeksforgeeks.org/difference-between-open-source-software-and-closed-source-software/\nTemple-Raston, D. (2021, April 16). A “Worst Nightmare” cyberattack: The untold story of the SolarWinds hack. NPR. https://www.npr.org/2021/04/16/985439655/a-worst-nightmare-cyberattack-the-untold-story-of-the-solarwinds-hack\nThe Linux Foundation. (2018, March 8). Why using open source software helps companies stay flexible and innovate - linux foundation. The Linux Foundation. https://www.linuxfoundation.org/blog/blog/why-using-open-source-software-helps-companies-stay-flexible-and-innovate\nVaughan-Nichols, S. J. (2020, December 18). SolarWinds, the world’s biggest security failure and open source’s better answer. The New Stack. https://thenewstack.io/solarwinds-the-worlds-biggest-security-failure-and-open-sources-better-answer/\nWheeler, D. A. (2015, July 18). Why open source software / free software (OSS/FS, FOSS, or FLOSS)? Look at the numbers! https://dwheeler.com/oss_fs_why.html\n","permalink":"https://zags.dev/papers/open-vs-closed-source/","summary":"A business analysis of open versus closed source technology in information technology infrastructure.","title":"Open versus Closed Source Information Technology Infrastructure"},{"content":"Abstract This paper examines fundamental ontological questions about the nature of reality and consciousness. 
Through analysis of competing philosophical frameworks, primarily monism and dualism, it explores how consciousness may be situated within reality and evaluates the strengths and limitations of each theoretical approach. The investigation extends to contemporary perspectives including eliminativism, panpsychism, and mysterianism to provide a comprehensive assessment of how we might understand the foundations of reality. This paper aims to demonstrate that while definitive answers remain elusive, the inquiry itself offers valuable insights into how we perceive and understand existence.\nKeywords: reality, consciousness, ontology, philosophy, monism, dualism\nFoundations of Reality Introduction One of the fundamental questions in both philosophy and science is \u0026ldquo;what is real?\u0026rdquo; The field of ontology, a branch of metaphysics, addresses this question by examining the nature of reality and existence. Through ontology, we can explore critical questions about the source of reality and how we might substantiate our claims about it.\nA central challenge in studying reality is determining the \u0026ldquo;location\u0026rdquo; of consciousness, what is commonly known as the mind-body problem. This problem investigates whether consciousness resides primarily in the mental or physical domain. While numerous theories attempt to resolve this dichotomy, philosophy faces an inherent limitation: metaphysical propositions often resist quantifiable verification. Given this constraint, a methodical approach involves evaluating competing theories on their merits to develop a coherent understanding based on cumulative philosophical insights. Through this process, we can better investigate whether reality and consciousness exist as physical entities, mental phenomena, or some combination thereof.\nTheoretical Frameworks: Monism and Dualism The Monist Perspective Monism holds that reality consists of a single fundamental substance or principle. In the context of consciousness, monists argue that mental and physical phenomena derive from the same underlying reality (Schaffer). This perspective eliminates the need to explain how distinct substances might interact, offering a more parsimonious explanation of consciousness.\nPhysicalism Physicalism represents a prominent monist theory asserting that mental states are ultimately physical in nature. According to this view, consciousness emerges from the neurobiological processes of the brain. The distinctive features of consciousness, including subjective experiences and intentionality, are explained as products of particular physical arrangements and processes (Stoljar).\nThe physicalist account maintains that modifying the underlying physical or biological structures would necessarily alter consciousness. This perspective aligns with scientific materialism and offers the advantage of methodological continuity with the natural sciences. However, physicalism struggles to explain how purely physical processes give rise to subjective experience \u0026mdash; what philosopher David Chalmers terms the \u0026ldquo;hard problem of consciousness.\u0026rdquo;\nNon-reductive Physicalism Non-reductive physicalism attempts to navigate this difficulty by maintaining that while mental states are physically instantiated, they cannot be reduced to or fully explained in physical terms. This position acknowledges the physical basis of consciousness while preserving the integrity and causal efficacy of mental phenomena. 
The Dualist Alternative
In contrast to monism, dualism posits that reality comprises two fundamentally distinct substances: the physical and the mental (Robinson). This theoretical framework addresses the apparent irreducibility of consciousness to physical processes by placing it in a separate ontological category.
Cartesian Dualism
René Descartes advanced an influential form of substance dualism. In his “Meditations,” Descartes employs methodical doubt to question everything that can be doubted, including the reliability of sensory perception. He notes that while dreaming, one cannot distinguish the dream state from wakefulness (Descartes). This observation raises the possibility that our perceived reality may exist independently of physical interaction.
Descartes concludes that while he can doubt the existence of the physical world, he cannot doubt his own thinking: “cogito, ergo sum” (I think, therefore I am). This leads him to posit a fundamental distinction between the thinking mind (res cogitans) and the extended physical world (res extensa). This distinction forms the basis for his dualistic understanding of reality.
Perceptual Idealism
George Berkeley takes the dualist insight further by arguing that reality exists solely through perception. According to Berkeley, since we apprehend physical reality only through mental perception, and cannot access reality independently of consciousness, the mental domain must be primary. His famous dictum “esse est percipi” (to be is to be perceived) suggests that material objects exist only as perceptions in minds (Berkeley). This represents a form of idealism that privileges the mental substance over the physical.
Interactionism and the Interface Problem
A persistent challenge for dualist theories is explaining how the mental and physical substances interact, a difficulty known as the interface problem. If consciousness exists in a non-physical realm, how does it causally influence the physical brain and body? Conversely, how do physical events affect the non-physical mind? This explanatory gap represents a significant theoretical hurdle for dualist accounts.
Contemporary Perspectives
Functionalism and Multiple Realizability
Daniel Dennett offers a functionalist account of consciousness that focuses on the functional role of mental states rather than their physical substrate. In his thought experiment “Where Am I?”, Dennett explores how consciousness might be replicated or transferred to different physical media, such as computers (Dennett).
This perspective suggests that consciousness depends on information-processing patterns rather than specific physical compositions. However, Dennett’s example still requires some physical implementation for consciousness to exist, which some interpret as support for a sophisticated form of physicalism. The functionalist approach introduces the concept of multiple realizability: the idea that mental states can be instantiated in different physical systems, provided they maintain the appropriate functional organization.
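Multiple realizability has a natural software analogy: the separation of an interface from its implementations. The sketch below is only an illustration of that structure, not an argument, and every name in it (PainRole, CarbonRealizer, SiliconRealizer) is invented for the example.

```python
from abc import ABC, abstractmethod


class PainRole(ABC):
    """A functional role: defined by what it does (inputs to outputs),
    not by what it is made of."""

    @abstractmethod
    def register_damage(self, intensity: float) -> str:
        """Map bodily damage to avoidance behavior."""


class CarbonRealizer(PainRole):
    """One physical realization: a cartoon biological substrate."""

    def register_damage(self, intensity: float) -> str:
        # C-fiber-style signaling, crudely caricatured.
        return "withdraw limb" if intensity > 0.5 else "ignore"


class SiliconRealizer(PainRole):
    """A different physical realization with the same functional profile."""

    def register_damage(self, intensity: float) -> str:
        # Table lookup instead of neural signaling; same input-output role.
        return {True: "withdraw limb", False: "ignore"}[intensity > 0.5]


# Functionalism's claim, in caricature: whatever occupies the role
# counts as the same mental state, regardless of substrate.
for realizer in (CarbonRealizer(), SiliconRealizer()):
    assert realizer.register_damage(0.9) == "withdraw limb"
```

The analogy also marks the point critics press against functionalism: two realizers can share an input-output profile while nothing in that profile settles whether they share subjective experience.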
Eliminativism: Questioning the Concepts
Eliminativism challenges the very framework of the mind-body problem by arguing that our folk psychological concepts of consciousness and mental states are fundamentally flawed. Proponents of this view contend that consciousness is not a coherent entity requiring explanation but rather a conceptual construct that will eventually be replaced by more precise neuroscientific descriptions (Tomasik).
This perspective distinguishes between observed reality (our subjective experience) and absolute reality (the objective world independent of observation). According to eliminativists, what we call consciousness is simply the brain’s interpretation of its own processes: a useful fiction rather than an ontological reality requiring special explanation.
Panpsychism: Consciousness All the Way Down
Panpsychism proposes that consciousness or mind-like qualities are fundamental features of reality that exist throughout the physical world, not just in brains or biological systems (Skrbina). This theory addresses the emergence problem, how consciousness could arise from entirely non-conscious components, by suggesting that consciousness doesn’t emerge at all but exists as a basic property of matter.
Historical precedents for panpsychist thinking appear across diverse cultural traditions, from the Chinese concept of “Qi” and the Japanese “Kami” of Shinto to the Native American notion of “Great Spirit” and the Oceanic concept of “Mana” (Parkes). Within Western philosophy, variations of panpsychism appear in the works of Plato, Leibniz, William James, and, more recently, Bertrand Russell.
Contemporary versions of panpsychism attempt to reconcile this ancient intuition with modern scientific understanding. For instance, integrated information theory, developed by Giulio Tononi, proposes that consciousness correlates with complex systems’ capacity to integrate information, potentially extending consciousness in varying degrees throughout nature (Tononi).
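To give “capacity to integrate information” a concrete flavor, the toy calculation below computes total correlation (the sum of the parts’ entropies minus the whole’s entropy) for a two-unit system. This is emphatically not Tononi’s Φ, which involves partitioning a system’s cause-effect structure; it is only a minimal sketch of treating integration as a graded, measurable quantity.

```python
import math

# Joint distributions over two binary "units" A and B.
# Keys are (a, b) states; values are probabilities summing to 1.
CORRELATED = {(0, 0): 0.5, (1, 1): 0.5}                       # A and B always agree
INDEPENDENT = {(a, b): 0.25 for a in (0, 1) for b in (0, 1)}  # no constraint


def entropy(dist):
    """Shannon entropy in bits of a probability dictionary."""
    return -sum(p * math.log2(p) for p in dist.values() if p > 0)


def marginal(joint, index):
    """Marginal distribution of one unit from the joint distribution."""
    out = {}
    for state, p in joint.items():
        out[state[index]] = out.get(state[index], 0.0) + p
    return out


def total_correlation(joint):
    """Sum of marginal entropies minus joint entropy: how much
    structure the whole has beyond its parts taken independently."""
    n = len(next(iter(joint)))
    return sum(entropy(marginal(joint, i)) for i in range(n)) - entropy(joint)


print(total_correlation(CORRELATED))   # 1.0 bit of integration
print(total_correlation(INDEPENDENT))  # 0.0 bits
```

The perfectly correlated pair scores one bit because knowing either unit fully determines the other, while the independent pair scores zero; graded measures in this spirit are what allow the theory to extend “degrees” of integration throughout nature.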
Mysterianism: The Limits of Understanding
Mysterianism takes a distinctive epistemic stance by arguing that the mind-body problem exceeds human cognitive capabilities (McGinn). According to this view, the subjective nature of consciousness, what Thomas Nagel described as the “what it is like” quality of experience (Nagel), makes it intrinsically resistant to objective analysis.
Colin McGinn argues that the explanatory gap between physical processes and subjective experience is unbridgeable given the structure of human cognition. The qualitative aspects of experience (qualia) cannot be captured in quantitative terms, making a comprehensive theory of consciousness perpetually elusive. This position does not deny that consciousness has a natural explanation but suggests that humans may be cognitively closed to discovering it, much as a mouse cannot comprehend quantum physics.
Implications and Significance
The diversity of theories concerning consciousness and reality raises an important question: why does this philosophical inquiry matter? How we conceptualize reality shapes our approach to knowledge, ethics, and scientific inquiry. If reality itself cannot be definitively characterized, how should we regard the knowledge built upon this uncertain foundation?
The exploration of these metaphysical questions encourages intellectual humility and openness to alternative perspectives. It demonstrates that even our most basic assumptions about reality deserve critical examination. Moreover, these philosophical considerations have practical implications for fields ranging from artificial intelligence and cognitive science to medical ethics and law.
Conclusion
The question of reality’s foundations remains one of philosophy’s most enduring and challenging problems. While monism offers theoretical elegance through its unified account of reality, it struggles to explain how physical processes generate subjective experience. Dualism acknowledges the apparent distinctiveness of consciousness but faces difficulties explaining the interaction between mental and physical domains. Alternative perspectives like eliminativism, panpsychism, and mysterianism offer valuable insights but come with their own conceptual challenges.
This philosophical landscape reveals that our understanding of reality and consciousness remains incomplete. Yet the very process of interrogating these questions enriches our conceptual framework and reminds us of the provisional nature of knowledge. In confronting the limitations of our understanding, we gain valuable perspective on the complexity and wonder of existence itself.
As scientific investigation continues to advance our understanding of the brain and cognitive processes, philosophical inquiry remains essential for interpreting these findings within a broader conceptual framework. The dialogue between empirical research and philosophical reflection offers the most promising path toward a richer comprehension of reality’s foundations.
References
Berkeley, George. “To Be Is to Be Perceived.” A Treatise Concerning the Principles of Human Knowledge, 1710.
Chalmers, David J. “Facing Up to the Problem of Consciousness.” Journal of Consciousness Studies, vol. 2, no. 3, 1995, pp. 200–219.
Dennett, Daniel C. “Where Am I?” Brainstorms, The MIT Press, 1978, https://www.lehigh.edu/~mhb0/Dennett-WhereAmI.pdf/.
Descartes, René, et al. “Meditations I and II.” The Philosophical Works of Descartes, Cambridge University Press, Cambridge, UK, 1967.
McGinn, Colin. “Can We Solve the Mind–Body Problem?” Mind, vol. 98, no. 391, 1989, pp. 349–66. JSTOR, http://www.jstor.org/stable/2254848.
Nagel, Thomas. “What Is It Like to Be a Bat?” The Philosophical Review, vol. 83, no. 4, 1974, pp. 435–50. JSTOR, https://doi.org/10.2307/2183914.
Parkes, Graham. “The Awareness of Rocks.” Mind That Abides, edited by David Skrbina, chapter 17, https://grahamparkes.net/core/elfinder/files/pdf/E8-Parkes-The_Awareness_of_Rock.pdf/.
Robinson, Howard. “Dualism.” The Stanford Encyclopedia of Philosophy, Fall 2020 Edition, edited by Edward N. Zalta, https://plato.stanford.edu/archives/fall2020/entries/dualism/.
Schaffer, Jonathan. “Monism.” The Stanford Encyclopedia of Philosophy, Winter 2018 Edition, edited by Edward N. Zalta, https://plato.stanford.edu/archives/win2018/entries/monism/.
Skrbina, David. “Panpsychism.” Internet Encyclopedia of Philosophy, https://iep.utm.edu/panpsych/.
Stoljar, Daniel. “Physicalism.” The Stanford Encyclopedia of Philosophy, Summer 2022 Edition, edited by Edward N. Zalta, https://plato.stanford.edu/archives/sum2022/entries/physicalism/.
Tomasik, Brian. “The Eliminativist Approach to Consciousness.” Center on Long-Term Risk, 15 June 2020, https://longtermrisk.org/the-eliminativist-approach-to-consciousness/.
Tononi, Giulio. “Integrated Information Theory of Consciousness: An Updated Account.” Archives Italiennes de Biologie, vol. 150, no. 4, 2012, pp. 293–329.