<p><strong><em>Long before silicon and 1s and 0s, philosophers mused about “automata” that moved of their own will. Around 400 BCE, Archytas of Tarentum, a friend of Plato, allegedly built a mechanical pigeon, a contraption that whispered the first promise of machines that could “decide” for themselves.</em></strong></p><p>In 1950, Alan Turing sidestepped semantic debates and asked “Can machines think?” In his paper “Computing Machinery and Intelligence,” he reframed the question of machine thought as a practical test rather than an endless philosophical argument: the “imitation game,” now immortalized as the Turing Test, in which a machine passes if it can convincingly imitate human responses. Six years later, the Dartmouth Summer Research Project on AI convened Claude Shannon, John McCarthy, Marvin Minsky, and Nathaniel Rochester to map out “electronic brains,” a gathering since dubbed “the Constitutional Convention of AI.”</p><p>They envisioned programs capable of problem‑solving, language translation, and symbolic reasoning, laying foundations for languages like LISP and pioneering work in neural nets.</p><p>Fast‑forward to 1998: Larry Page and Sergey Brin, two PhD students at Stanford, incorporated Google, which had begun two years earlier as a campus research project called BackRub, and ran it out of a rented Menlo Park garage, aiming to organize the web’s chaos with a better ranking algorithm. What began as a search experiment exploded into a global operating system, turning “google” into a verb and creating new paradigms in information retrieval and advertising.</p><p>At its core, AI is just code plus data plus math: neural nets that mimic neurons, symbolic engines that mimic logic. You write algorithms, feed them gazillions of examples, tweak weights, and voilà, patterns emerge. Heck, you could roll your own tiny model on a laptop in a weekend. No stress.
Or so you think.</p><p><strong><em>Complexities of Building a Neural Network</em></strong></p><p>Imagine constructing a neural network from the ground up. You'd start with a basic architecture, perhaps a simple perceptron, and manually implement forward and backward propagation. Then you'd painstakingly adjust weights and biases, ensuring the model learns effectively. This process demands a deep understanding of linear algebra, calculus, and optimization techniques.</p><p>Now, consider the data. In the early days, datasets weren't readily available. You'd have to curate your own, ensuring it was diverse and representative. This step is crucial, because the quality of your data directly determines the model's performance.</p><p>And the challenges? They're manifold. From overfitting and underfitting to vanishing gradients and computational limitations, the hurdles are real. Each obstacle demands an innovative solution, often spurring the development of new algorithms and techniques.</p><p>Today, developers can spin up models locally or via cloud APIs, often in just a few lines of Python. That democratizes access, but it also tempts us to treat AI as a plug‑and‑play black box. Whether you’re tuning transformer layers or writing a mini LISP interpreter, the principles remain the same: data representation, algorithmic logic, and iterative refinement.</p><p><strong><em>But here’s the recurring AI doomsday message that keeps popping up, “AI will take our jobs,” and the twist that no one CTRL‑Fs in the docs: this renaissance has lulled us into intellectual slack. Developers now lean on black‑box APIs, trusting AI to “think” for them. Creativity? Phoned in. Independent inquiry? Bypassed by a library call. </em></strong><strong>We risk becoming steered, not steering</strong>.</p><p>Here’s the kicker: the very ease that makes AI powerful also lulls us into passivity.
When autocomplete and code suggestions finish our thoughts, we risk outsourcing creativity and critical thinking to models we barely understand.</p><p>Developers become prompt engineers issuing commands to inscrutable oracles, rather than investigators formulating questions and hypotheses.</p><p>If we’re not careful, AI won’t replace us outright; it will atrophy our capacity to think independently, turning us into cogs in algorithmic workflows rather than architects of ideas.</p><p><strong><em>Here's my two cents: to break free from this trend, we need to fuse the discipline of traditional research with AI’s creative spark and the relentless iteration of building real projects. Dive into the classics. They are the blueprint, the real deal.</em></strong></p><p>Google never succeeded by clicking buttons alone. It was exhaustive experiments, crawling millions of pages, tweaking PageRank, and peer‑reviewing results that turned a dorm project into an empire.</p><p>I used to view AI as a numbing hack, an autopilot for code and prose. Then I paused. I realized AI is more like an accelerator: it can strip away low‑level drudgery so we can chase big ideas, design new experiences, and prototype faster. When you treat AI as a collaborator, you amplify your creative bandwidth rather than curtail it.</p><p>But here’s where I freeze the frame: will you let AI sharpen you or carve you into a block? Hold that thought, because I’m still asking it of myself. In the next post, we’ll decode best practices for teaming with AI, avoiding the creativity trap, and carving pathways to discovery. Pause. Reflect. And remember: we all have the same questions.</p>
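<p>For the curious, the "from the ground up" perceptron described earlier (a forward pass, an error signal, and manual weight and bias updates) really does fit on a laptop screen. Below is a minimal sketch, not production code; the tiny AND-gate dataset, the learning rate, and the epoch count are assumptions chosen purely for illustration:</p>

```python
import random

# Toy AND-gate dataset: (inputs, target). An illustrative assumption,
# standing in for the "curated, representative data" discussed above.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]

random.seed(0)
weights = [random.uniform(-1, 1) for _ in range(2)]
bias = 0.0
lr = 0.1  # learning rate, chosen arbitrarily for the sketch

def forward(x):
    # Forward pass: weighted sum plus bias, then a step activation.
    s = sum(w * xi for w, xi in zip(weights, x)) + bias
    return 1 if s > 0 else 0

for epoch in range(20):
    for x, target in data:
        error = target - forward(x)
        # Perceptron learning rule: nudge weights toward the target.
        weights = [w + lr * error * xi for w, xi in zip(weights, x)]
        bias += lr * error

print([forward(x) for x, _ in data])  # → [0, 0, 0, 1], the AND function
```

<p>Even this toy hides the real lessons: the step activation has no useful gradient (hence backpropagation and smoother activations), and a single perceptron cannot learn XOR, which is exactly the kind of limitation that forced the field to invent new techniques.</p>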