Imagine an airplane flying one millimeter above the ground and circling the Earth once every 25 seconds while counting every blade of grass. Shrink all that down so that it fits in the palm of your hand, and you'd have something equivalent to a modern hard drive, an object that can likely hold more information than your local library. So how does it store so much information in such a small space?

At the heart of every hard drive is a stack of high-speed spinning discs with a recording head flying over each surface. Each disc is coated with a film of microscopic magnetized metal grains, and your data doesn't live there in a form you can recognize. Instead, it is recorded as a magnetic pattern formed by groups of those tiny grains. In each group, also known as a bit, all of the grains have their magnetizations aligned in one of two possible states, which correspond to zeroes and ones. Data is written onto the disc by converting strings of bits into electrical current fed through an electromagnet. This magnet generates a field strong enough to change the direction of the metal grains' magnetization. Once this information is written onto the disc, the drive uses a magnetic reader to turn it back into a useful form, much like a phonograph needle translates a record's grooves into music.

But how can you get so much information out of just zeroes and ones? Well, by putting lots of them together. For example, a letter is represented in one byte, or eight bits, and your average photo takes up several megabytes, each of which is 8 million bits. Because each bit must be written onto a physical area of the disc, we're always seeking to increase the disc's areal density, or how many bits can be squeezed into one square inch. The areal density of a modern hard drive is about 600 gigabits per square inch, 300 million times greater than that of IBM's first hard drive from 1957.
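The arithmetic above can be sketched in a few lines. This is just an illustration of the numbers quoted in the transcript (8 bits per byte, 8 million bits per megabyte, 600 gigabits per square inch, and a 300-million-fold improvement since 1957); none of the values are additional measurements.

```python
# Back-of-the-envelope figures from the text (illustrative only).
BITS_PER_BYTE = 8

# A letter fits in one byte; e.g. the 8-bit pattern for 'A':
letter_pattern = format(ord('A'), '08b')
print(letter_pattern)            # '01000001'

# One megabyte is 8 million bits.
megabyte_bits = 8_000_000

# Modern areal density vs. IBM's first hard drive (1957),
# which the text says was 300 million times less dense.
modern_density = 600e9                       # bits per square inch
ibm_1957_density = modern_density / 300e6
print(ibm_1957_density)          # 2000.0 bits per square inch
```

Working backwards like this suggests the 1957 drive stored on the order of two thousand bits in every square inch, which gives a sense of how far areal density has come.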
This amazing advance in storage capacity wasn't just a matter of making everything smaller, but involved multiple innovations. A technique called thin-film lithography allowed engineers to shrink the reader and writer. And despite its size, the reader became more sensitive by taking advantage of new discoveries in the magnetic and quantum properties of matter. Bits could also be packed closer together thanks to mathematical algorithms that filter out noise from magnetic interference and find the most likely bit sequences from each chunk of read-back signal. And thermal expansion control of the head, enabled by placing a heater under the magnetic writer, allowed it to fly less than five nanometers above the disc's surface, about the width of two strands of DNA.

For the past several decades, the exponential growth in computer storage capacity and processing power has followed a pattern known as Moore's Law, which, in 1975, predicted that information density would double every two years. But at around 100 gigabits per square inch, shrinking the magnetic grains further or cramming them closer together posed a new risk called the superparamagnetic effect. When a magnetic grain's volume is too small, its magnetization is easily disturbed by thermal energy, which can cause bits to switch unintentionally, leading to data loss. Scientists resolved this limitation in a remarkably simple way: by changing the direction of recording from longitudinal to perpendicular, allowing areal density to approach one terabit per square inch. Recently, the potential limit has been increased yet again through heat-assisted magnetic recording. This uses an even more thermally stable recording medium, whose resistance to magnetization is momentarily reduced by heating up a particular spot with a laser, allowing data to be written.
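The superparamagnetic effect described above is often summarized by comparing a grain's magnetic energy barrier (its anisotropy constant Ku times its volume V) with the thermal energy kB·T. A minimal sketch of that comparison follows; the Ku values, grain diameters, and the rule-of-thumb threshold of about 60 are illustrative assumptions chosen for the example, not figures from the transcript.

```python
import math

K_B = 1.38e-23  # Boltzmann constant, J/K


def stability_ratio(ku, grain_diameter_nm, temperature_k=300.0):
    """Return Ku*V / (kB*T) for a spherical grain of the given diameter.

    The bit is thermally stable only when this ratio is large: the
    energy barrier holding the magnetization in place must dwarf the
    thermal energy that tries to flip it.
    """
    radius_m = grain_diameter_nm * 1e-9 / 2
    volume = (4 / 3) * math.pi * radius_m**3
    return ku * volume / (K_B * temperature_k)


# A common rule of thumb is that the ratio should exceed roughly 60
# for data to survive on the order of a decade. The Ku value below
# (5e5 J/m^3) is an assumed, illustrative anisotropy constant.
print(stability_ratio(ku=5e5, grain_diameter_nm=10))  # ~63: just stable
print(stability_ratio(ku=5e5, grain_diameter_nm=5))   # ~8: bits will flip
```

Halving the grain diameter cuts the volume, and hence the energy barrier, by a factor of eight, which is why simply shrinking grains runs into this wall and why higher-Ku media (as used in heat-assisted recording) became necessary.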
And while those drives are currently in the prototype stage, scientists already have the next potential trick up their sleeves: bit-patterned media, where bit locations are arranged in separate, nano-sized structures, potentially allowing for areal densities of twenty terabits per square inch or more. So it's thanks to the combined efforts of generations of engineers, materials scientists, and quantum physicists that this tool of incredible power and precision can spin in the palm of your hand.