Wayne Radinsky

@waynerad

Followers: 348
Following: 193
Statuses: 10K

Software engineer, futurist

Denver, Colorado, USA
Joined April 2008
@waynerad
Wayne Radinsky
1 hour
There's a city in China you've never heard of that has 36 million people -- the largest in the world. It's not Shanghai or Beijing, it's Chongqing... or so this YouTuber (Drew Binsky) says. Well, the list of urban agglomerations I found says Chongqing is number 43, right after... eh, do I really have to list out 42 cities? It would be much simpler if it were just a handful. Oh, well, let's do this. Chongqing is at number 43, at 10.9 million, right after: Changsha (11.0 million), Hyderabad (11.4 million), Tianjin (11.5 million), Paris (11.5 million), Lima (11.8 million), Wuhan (12.2 million), Rio de Janeiro (12.5 million), Chennai (12.6 million), Xi'an (12.8 million), Ho Chi Minh City (Saigon) (13.9 million), Hangzhou (13.9 million), Bengaluru (Bangalore) (14.2 million), Lahore (14.5 million), Johannesburg (14.6 million), Xiamen (14.9 million), London (14.9 million), Kinshasa (15.6 million), Istanbul (15.9 million), Tehran (16.5 million), Buenos Aires (16.7 million), Los Angeles (17.2 million), Chengdu (17.3 million), Osaka (17.7 million), Kolkata (Calcutta) (17.7 million), Moskva (Moscow) (19.1 million), Lagos (20.7 million), Karachi (20.9 million), Krung Thep (Bangkok) (21.2 million), Beijing (21.2 million), New York (22.0 million), São Paulo (22.1 million), Dhaka (22.5 million), Al-Qahirah (Cairo) (22.5 million), Seoul (25.1 million), Ciudad de México (Mexico City) (25.1 million), Mumbai (27.1 million), Manila (27.2 million), Jakarta (29.2 million), Delhi (34.6 million), Shanghai (40.8 million), Tokyo (41.0 million), and (drum roll please...) Guangzhou (70.1 million). If you've never heard of Guangzhou, you probably have heard of Shenzhen. What happened is that a bunch of cities in the Northern Pearl River Delta in China grew to the point where they basically all merged into a single metropolitan area. These include Shenzhen, Dongguan, Foshan, Huizhou, Jiangmen, Zhongshan, and, yes, Guangzhou, which, for some reason, became the name for the whole agglomeration, rather than Shenzhen. It seems to me that the reason he thinks Chongqing has more people than Guangzhou is that Chongqing is *denser*. The subjective feeling of "population" probably correlates mostly with population density. According to the numbers, Los Angeles has more people, with 17.2 million, but I've been to Los Angeles (it's the largest city I've been to), and LA is the definition of "suburban sprawl". It's a place where you can drive for 3.5 hours in one direction and see nothing but houses and shopping malls. I've never been to New York, the largest city in the US, or Chicago, the 3rd largest, but from videos it doesn't seem like they go "3D" and stack people on top of each other to the degree Chongqing does. They say "5D" but I don't know where they're getting the 4th and 5th dimensions -- it looks 3D to me. I found a list of cities by population density, but, weirdly, when I look at a bunch of those top places on Google Street View (Port-au-Prince, Giza, Manila, Mandaluyong, Malé, Dhaka, Bnei Brak, Kolkata, Kathmandu), they look less crowded than Chongqing in the video. Well, I suppose there's always the possibility the statistics are wrong? And maybe Chongqing really is the world's biggest city, like the YouTuber insists? How are all these people counted, anyway? For reference, the population numbers above come from a list of urban agglomerations; the density ranking is the "List of cities by population density", which goes by municipal boundaries, not the total agglomeration like the numbers above.
While we're on the subject of China, VPNs in China don't give you access to Gmail or Google Maps or any other Google services, or most "Western" services, according to Elina Bakunova aka "Eli From Russia" -- unlike Russia, which blocks lots of US-based internet services but doesn't bother to stop people from getting around the blocks with VPNs. Apparently China has cracked down hard on VPNs, so it is very hard in China to get a VPN that will give you access to the outside. Also, she says Gmail is illegal in China. She was able to get at information in her Gmail by contacting friends in Russia through VKontakte, a Russian social network like Facebook that is apparently not blocked in either Russia or China (and is also not blocked here -- so you can communicate with people in Russia and China with it from here -- if you don't mind your communications being monitored by the Russian government). #demographics #population #populationdensity #urbanization
1
0
0
@waynerad
Wayne Radinsky
1 hour
Find stuff in aerial images. GeoDeep is "a fast, easy to use, lightweight Python library for AI object detection and semantic segmentation in geospatial rasters (GeoTIFFs), with pre-built models included." See the example images of where it puts bounding boxes on cars, swimming pools, and tennis courts, and draws an outline that follows roads. It has a "detect" system that outputs class labels with confidence scores (class labels like "small vehicle" or "tennis court", confidence scores like 0.838), and a "segment" system that gives you outlines of roads and buildings. #solidstatelife #ai #computervision
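From the description, usage is apparently as simple as pointing it at a GeoTIFF and naming a pre-built model. Here's my guess at what that looks like -- the detect() signature and return values are assumptions on my part, so check the GeoDeep README before trusting this:

```python
# Hypothetical usage sketch -- detect()'s exact signature and return values
# are my assumptions from the project description, not verified against
# the GeoDeep README.
from geodeep import detect

# Run a pre-built detection model against a GeoTIFF orthophoto.
bboxes, scores, classes = detect("orthophoto.tif", "cars")

for box, score, cls in zip(bboxes, scores, classes):
    # e.g. "small vehicle", 0.838, [x_min, y_min, x_max, y_max]
    print(cls, score, box)
```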
0
0
0
@waynerad
Wayne Radinsky
1 day
Physical unclonable function (PUF) technology. This article is from last November but I only just saw it today, and I'm sharing it anyway because I only just now discovered this technology is a thing that exists. In cryptography, you can do something called a "challenge-response", where you generate some random input, and the device combines it with a secret key using some algorithm to generate some output, which you can check to see if it's correct and the device is authentic. This relies on both the challenger and the hardware device having the shared secret, but not any attackers. If an attacker gets their hands on the physical device, though, they can copy it. If the key is stored in ROM (read-only memory), it can simply be copied out of the memory by the attacker, and then they can make unlimited copies of the device by assembling the same components and making ROM with the same key. What physical unclonable function (PUF) technology does is make this kind of copying impossible. PUFs exploit random physical factors introduced during semiconductor manufacturing that are unpredictable and uncontrollable. As such, it is impossible to manufacture a copy, even for an attacker who has access to the same semiconductor manufacturing equipment as the original manufacturer. "Due to deep submicron manufacturing process variations, every transistor in an IC has slightly different physical properties. These variations lead to small but measurable differences in electronic properties, such as transistor threshold voltages and gain factor. Since these process variations are not fully controllable during manufacturing, these physical device properties cannot be copied or cloned." "By utilizing these inherent variations, PUFs are very valuable for use as a unique identifier for any given IC. They do this through circuitry within the IC that converts the tiny variations into a digital pattern of 0s and 1s, which is unique for that specific chip and is repeatable over time. This pattern is a 'silicon fingerprint,' comparable to its human biometric counterpart." The article is about a particular type of PUF called an SRAM PUF. Synopsys, a company that makes software used for designing chips, has a circuit design that you can add to the circuit design for an SRAM (combined before manufacturing at the "intellectual property" -- IP -- stage) to get it to generate a "root key" when the device is first started up. #solidstatelife #cybersecurity #cryptography
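To make the challenge-response idea concrete, here's a minimal sketch in Python using HMAC -- my choice of algorithm for illustration, not anything from the article. The point is that anyone who extracts SHARED_SECRET can clone the device, which is exactly the attack a PUF prevents by re-deriving the key from silicon variations instead of storing it:

```python
import hmac
import hashlib
import os

# In a conventional design this secret sits in ROM, where an attacker with
# the physical device can read it out. With an SRAM PUF, the equivalent
# "root key" is re-derived at power-up from the chip's silicon fingerprint
# and never needs to be stored.
SHARED_SECRET = b"per-device secret key"

def device_response(challenge: bytes) -> bytes:
    # The device mixes the random challenge with its secret key.
    return hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()

def challenger_verify(challenge: bytes, response: bytes) -> bool:
    # The challenger knows the same secret, so it can check the response.
    expected = hmac.new(SHARED_SECRET, challenge, hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

challenge = os.urandom(32)  # fresh random input every time, so replays fail
assert challenger_verify(challenge, device_response(challenge))
```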
0
0
0
@waynerad
Wayne Radinsky
1 day
"Ex-Google, Apple engineers launch unconditionally open source Oumi AI platform that could help to build the next DeepSeek." "What neither DeepSeek nor Llama enables, however, is full unconditional access to all the model code, including weights as well as training data. Without all that information, developers can still work with the open model but they don't have all the necessary tools and insights to understand how it really works and more importantly how to build an entirely new model. That's a challenge that a new startup led by former Google and Apple AI veterans aims to solve." "Launching today, Oumi is backed by an alliance of 13 leading research universities including Princeton, Stanford, MIT, UC Berkeley, University of Oxford, University of Cambridge, University of Waterloo and Carnegie Mellon. Oumi's founders raised $10 million, a modest seed round they say meets their needs. While major players like OpenAI contemplate $500 billion investments in massive data centers through projects like Stargate, Oumi is taking a radically different approach. The platform provides researchers and developers with a complete toolkit for building, evaluating and deploying foundation models." The $10 million makes me wonder if this has a chance of working. But, let's continue. (Lots of quotes follow.) "Oumi is a fully open-source platform that streamlines the entire lifecycle of foundation models -- from data preparation and training to evaluation and deployment. Whether you're developing on a laptop, launching large scale experiments on a cluster, or deploying models in production, Oumi provides the tools and workflows you need." "With Oumi, you can: Train and fine-tune models from 10M to 405B parameters using state-of-the-art techniques (SFT, LoRA, QLoRA, DPO, and more), work with both text and multimodal models (Llama, DeepSeek, Qwen, Phi, and others), synthesize and curate training data with LLM judges, deploy models efficiently with popular inference engines (vLLM, SGLang), evaluate models comprehensively across standard benchmarks, run anywhere - from laptops to clusters to clouds (AWS, Azure, GCP, Lambda, and more), and integrate with both open models and commercial APIs (OpenAI, Anthropic, Vertex AI, Together, Parasail, ...). "All with one consistent API, production-grade reliability, and all the flexibility you need for research." "Here are some of the key features that make Oumi stand out:" "Zero Boilerplate: Get started in minutes with ready-to-use recipes for popular models and workflows. No need to write training loops or data pipelines," "Enterprise-Grade: Built and validated by teams training models at scale," "Research Ready: Perfect for ML research with easily reproducible experiments, and flexible interfaces for customizing each component," "Broad Model Support: Works with most popular model architectures - from tiny models to the largest ones, text-only to multimodal," "State-Of-The-Art (SOTA) Performance: Native support for distributed training techniques (FSDP, DDP) and optimized inference engines (vLLM, SGLang)," "Community First: 100% open source with an active community. No vendor lock-in, no strings attached." Start here: #solidstatelife #ai #opensourceai
0
0
3
@waynerad
Wayne Radinsky
1 day
"Google's updated, public AI ethics policy removes its promise that it won't use the technology to pursue applications for weapons and surveillance." #solidstatelife #ai #aisafety #aiethics
0
0
0
@waynerad
Wayne Radinsky
4 days
"DeepClaude: Harness the power of DeepSeek R1's reasoning and Claude's creativity and code generation capabilities with a unified API and chat interface." Weird. How does one combine two LLMs like this? "Why R1 + Claude?" "DeepSeek R1's CoT trace demonstrates deep reasoning to the point of an LLM experiencing 'metacognition' - correcting itself, thinking about edge cases, and performing quasi Monte Carlo Tree Search in natural language." "However, R1 lacks in code generation, creativity, and conversational skills. Claude 3.5 Sonnet excels in these areas, making it the perfect complement. DeepClaude combines both models to provide: R1's exceptional reasoning and problem-solving capabilities, Claude's superior code generation and creativity, fast streaming responses in a single API call, and complete control with your own API keys." #solidstatelife #genai #llms #llms
1
0
1
@waynerad
Wayne Radinsky
4 days
@tbc0 I haven't heard about a Chinese replacement for CUDA. Tell me more. Have a link?
1
0
0
@waynerad
Wayne Radinsky
5 days
LLPlayer is a media player for language learning, with AI-generated subtitles (powered by OpenAI Whisper), dual subtitles (two languages simultaneously), real-time OCR and translation (powered by Google Translate and DeepL), and word lookup (click on any word in a subtitle). I'll try this when I get some time. If you beat me to it, let me know how it goes. LLPlayer is unrelated to LL Cool J (as far as I know). #solidstatelife #ai #genai #llms
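For the curious, the subtitle-generation part is the easiest piece to picture. This isn't LLPlayer's code, just a sketch of what Whisper transcription with timestamps looks like using the openai-whisper package:

```python
# Not LLPlayer's code -- just the underlying idea, using the openai-whisper
# package (pip install openai-whisper).
import whisper

model = whisper.load_model("base")        # small, CPU-friendly model
result = model.transcribe("lecture.mp4")  # extracts the audio, then transcribes

# Each segment carries timestamps, which is all a subtitle track needs.
for seg in result["segments"]:
    print(f"[{seg['start']:7.2f} -> {seg['end']:7.2f}] {seg['text'].strip()}")
```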
0
0
0
@waynerad
Wayne Radinsky
5 days
Exocomets -- comets in other solar systems outside our own -- have been spotted around 74 nearby stars. "Exocomets are icy bodies at least one kilometer in size orbiting stars other than our Sun. While they are too small to observe directly, these bodies occasionally collide with one another, releasing copious amounts of dust and pebbles. The exocomets and the debris they shed tend to orbit stars in belts, akin to our solar system's Kuiper Belt, and those belts are within reach of modern-day telescopes." "The radio wavelengths these observatories detect come from the faint glow of warm dust within the disks. Moreover, by combining data from several dishes, the observatories can make out fine details in the disk structures." #discoveries #astronomy #exoplanets
0
0
2
@waynerad
Wayne Radinsky
5 days
Lei (of Lei's Real Talk YouTube channel) looks at what AI professionals and the Chinese media inside China are saying about DeepSeek. AI professionals are very skeptical. However, the mainland Chinese media is in full hype mode. Videos of Chinese AI professionals expressing skepticism of the claims were removed from Chinese social media. She goes on to speculate about whether DeepSeek may have GPUs in violation of sanctions and whether the Chinese government (CCP) may have been involved. This person, Lei, is a member of the Falun Gong. The Falun Gong is extremely critical of the Chinese Communist Party (CCP). She talks about her Falun Gong membership in this video: For an outsider's perspective on Falun Gong (a cult), see: Lei made a follow-up video where she comments more on DeepSeek's GPUs and examines DeepSeek's complex ownership structure. She speculates the complex ownership structure may exist to hide owners that they don't want to make public. The company's headquarters in Hangzhou could be just a shell company, with the real company in Beijing. She ends with a rumor that the Chinese are working on language processing units (LPUs), hoping to undercut Nvidia's market dominance by flooding the market with cheap AI hardware (and making the sanctions irrelevant in the process). #solidstatelife #ai #genai #llms #deepseek #chineseai
1
0
1
@waynerad
Wayne Radinsky
6 days
Do the models DeepSeek released undermine the case for export control policies on chips? Dario Amodei, CEO of Anthropic, which makes the Claude models, doesn't think so. If anything, he thinks they make export control policies more important than they were a week ago. "Export controls serve a vital purpose: keeping democratic nations at the forefront of AI development. To be clear, they're not a way to duck the competition between the US and China. In the end, AI companies in the US and other democracies must have better models than those in China if we want to prevail. But we shouldn't hand the Chinese Communist Party technological advantages when we don't have to." "Before I make my policy argument, I'm going to describe three basic dynamics of AI systems that it's crucial to understand:" The "three basic dynamics" are: Scaling laws ("all else equal, scaling up the training of AI systems leads to smoothly better results on a range of cognitive tasks, across the board"), shifting the curve ("if the innovation is a 2x 'compute multiplier' (CM), then it allows you to get 40% on a coding task for $5M instead of $10M; or 60% for $50M instead of $100M, etc."), and shifting the paradigm ("the idea of using reinforcement learning to train models to generate chains of thought has become a new focus of scaling"). "The three dynamics above can help us understand DeepSeek's recent releases. About a month ago, DeepSeek released a model called 'DeepSeek-V3' that was a pure pretrained model -- the first stage described in #3 above. Then last week, they released 'R1', which added a second stage." "DeepSeek-V3 was actually the real innovation and what should have made people take notice a month ago (we certainly did). As a pretrained model, it appears to come close to the performance of state of the art US models on some important tasks, while costing substantially less to train (although, we find that Claude 3.5 Sonnet in particular remains much better on some other key tasks, such as real-world coding). DeepSeek's team did this via some genuine and impressive innovations, mostly focused on engineering efficiency. There were particularly innovative improvements in the management of an aspect called the 'Key-Value cache', and in enabling a method called 'mixture of experts' to be pushed further than it had before." However... "DeepSeek does not 'do for $6M what cost US AI companies billions'. I can only speak for Anthropic, but Claude 3.5 Sonnet is a mid-sized model that cost a few $10M's to train." "Sonnet's training was conducted 9-12 months ago, and DeepSeek's model was trained in November/December, while Sonnet remains notably ahead in many internal and external evals. Thus, I think a fair statement is 'DeepSeek produced a model close to the performance of US models 7-10 months older, for a good deal less cost (but not anywhere near the ratios people have suggested)'." "If the historical trend of the cost curve decrease is ~4x per year, that means that in the ordinary course of business -- in the normal trends of historical cost decreases like those that happened in 2023 and 2024 -- we'd expect a model 3-4x cheaper than 3.5 Sonnet/GPT-4o around now. Since DeepSeek-V3 is worse than those US frontier models -- let's say by ~2x on the scaling curve, which I think is quite generous to DeepSeek-V3 -- that means it would be totally normal, totally 'on trend', if DeepSeek-V3 training cost ~8x less than the current US models developed a year ago."
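The arithmetic behind that "on trend" claim is simple enough to check. My back-of-the-envelope version, not Amodei's code:

```python
# Back-of-the-envelope check of the "on trend" argument (my arithmetic,
# not Amodei's). Costs fall ~4x/year, and DeepSeek-V3 is assumed to sit
# ~2x behind year-old US frontier models on the scaling curve.
annual_cost_decrease = 4.0  # historical trend cited in the essay
performance_gap = 2.0       # "quite generous to DeepSeek-V3"

expected_ratio = annual_cost_decrease * performance_gap
print(f"Expected cost ratio vs. year-old US models: ~{expected_ratio:.0f}x cheaper")
# -> ~8x, which is why V3's price tag reads as "totally on trend" rather
# than a shock
```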
"Making AI that is smarter than almost all humans at almost all things will require millions of chips, tens of billions of dollars (at least), and is most likely to happen in 2026-2027. DeepSeek's releases don't change this, because they're roughly on the expected cost reduction curve that has always been factored into these calculations." #solidstatelife #ai #genai #llms #deepseek #aiscaling
1
0
2
@waynerad
Wayne Radinsky
6 days
RT @alexandr_wang: What does DeepSeek R1 & v3 mean for LLM data? Contrary to some lazy takes I’ve seen, DeepSeek R1 was trained on a shit…
0
322
0
@waynerad
Wayne Radinsky
11 days
Open-R1 is a new project that aims to replicate DeepSeek-R1 in a fully open source manner. If you're thinking, "But wait, isn't DeepSeek-R1 already open source?" -- not exactly. What's open is the model parameters (also called the model weights), but the complete process by which those parameters were created has not been made public. DeepSeek-R1 was released with a detailed "technical report" that explained the key steps behind its creation. But was it enough information for others to replicate the process? That's what we're going to find out. DeepSeek did not release the complete source code and training data used to create R1. "Data collection: How were the reasoning-specific datasets curated?" "Model training: No training code was released by DeepSeek, so it is unknown which hyperparameters work best and how they differ across different model families and scales." "Scaling laws: What are the compute and data trade-offs in training reasoning models?" "DeepSeek-R1 is a reasoning model built on the foundation of DeepSeek-V3. Like any good reasoning model, it starts with a strong base model, and DeepSeek-V3 is exactly that. This 671B Mixture of Experts (MoE) model performs on par with heavyweights like Sonnet 3.5 and GPT-4o. What's especially impressive is how cost-efficient it was to train -- just $5.5M -- thanks to architectural changes like Multi Token Prediction (MTP), Multi-Head Latent Attention (MLA) and a LOT (seriously, a lot) of hardware optimization." "DeepSeek also introduced two models: DeepSeek-R1-Zero and DeepSeek-R1, each with a distinct training approach. DeepSeek-R1-Zero skipped supervised fine-tuning altogether and relied entirely on reinforcement learning (RL), using Group Relative Policy Optimization (GRPO) to make the process more efficient. A simple reward system was used to guide the model, providing feedback based on the accuracy and structure of its answers." "DeepSeek-R1 started with a 'cold start' phase, fine-tuning on a small set of carefully crafted examples to improve clarity and readability. From there, it went through more RL and refinement steps, including rejecting low-quality outputs with both human preference based and verifiable reward, to create a model that not only reasons well but also produces polished and consistent answers." "Here's our plan of attack:" "Step 1: Replicate the R1-Distill models by distilling a high-quality reasoning dataset from DeepSeek-R1." "Step 2: Replicate the pure RL pipeline that DeepSeek used to create R1-Zero. This will involve curating new, large-scale datasets for math, reasoning, and code." "Step 3: Show we can go from base model -> SFT -> RL via multi-stage training." Note that the people involved in this project (Elie Bakouch, Leandro von Werra, and Lewis Tunstall) work for HuggingFace itself -- not some company that publishes models on HuggingFace. #solidstatelife #ai #genai #llms #codingai #china #deepseek
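The GRPO idea, by the way, is surprisingly simple to sketch: instead of training a separate value network, you sample a group of answers to each prompt and score each one against the rest of its group. A minimal illustration (my sketch, not Open-R1's code):

```python
import torch

def grpo_advantages(group_rewards: torch.Tensor) -> torch.Tensor:
    """Sketch of GRPO's core trick: no learned value function. Each sampled
    answer's advantage is its reward standardized against the other answers
    drawn for the same prompt."""
    return (group_rewards - group_rewards.mean()) / (group_rewards.std() + 1e-8)

# Example: 4 answers sampled for one prompt, rewarded on accuracy + format.
rewards = torch.tensor([1.0, 0.0, 1.0, 0.5])
print(grpo_advantages(rewards))  # above-average answers get positive advantage
```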
0
1
2
@waynerad
Wayne Radinsky
12 days
"Satellite firm bucks miniaturization trend, aims to build big for big rockets" "Over the last decade, much of the satellite industry has gone smaller. Similar to the trend in consumer electronics, in which more computing power and other capability can be packed into smaller devices, satellites have also gotten smaller and cheaper." "Smaller satellites typically sacrifice a lot of power, going from as much as 20 kilowatts down to 1 or 2 kW, Karan Kunjur, co-founder and chief executive of K2, said. They also often have a smaller aperture (such as a lens in a telescope), reducing the quality of observations. And they must make difficult trades between payload capacity and on-board propellant." "The reaction wheels that Honeywell Aerospace sells to Lockheed cost approximately $500,000 to $1 million apiece. K2 is now on its fourth iteration of an internally built reaction wheel and has driven the cost down to $35,000. Kunjur said about 80 percent of K2's satellite production is vertically integrated." The article goes on to describe the company's 'Mega Class' satellite bus -- a satellite bus is "the main structural component of a satellite, upon which payloads are hosted" -- which they claim will have similar capabilities as Lockheed's LM2100: "20 kW of power, 1,000 kg of payload capacity, and propulsion to move between orbits" but at a fraction of the cost. #space #satellites
0
0
2
@waynerad
Wayne Radinsky
13 days
"We, the undersigned, are employees of tech organizations and companies based in the United States. We are engineers, designers, business executives, and others whose jobs include managing or processing data about people. We are choosing to stand in solidarity with Muslim Americans, immigrants, and all people whose lives and livelihoods are threatened by the incoming administration's proposed data collection policies. We refuse to build a database of people based on their Constitutionally-protected religious beliefs." Signed by 2,842 people, from many companies, universities, organizations I've heard of, including Apple, Google, Facebook, Nvidia, Dell, Adobe, Yahoo, GitHub, GitLab, Intuit, Airbnb, Slack, IBM, Red Hat, MITRE, IEEE, MIT, GE, Oracle, Fastly, Docker, Uber, Lyft, Medium, Meetup, Stripe, Xilinx, Dropbox, VMWare, MongoDB, Akamai, Heroku, Autodesk, LinkedIn, Palantir, Synopsys, Accenture, Pivotal, Intel, Atlassian, Canonical, Instacart, Microsoft, Rackspace, Automattic, Cloudflare, Foursquare, Home Depot, Salesforce, Squarespace, Khan Academy, Walmart Labs, Charles Schwab, Northrop Grumman, Tableau Software, the Lifeboat Foundation, the Wikimedia Foundation, Booz Allen Hamilton, Cornell University, Harvard University, the Wharton School, Brandeis University, Stanford University, the University of Delhi, the University of Maryland, Oregon State University, the University of Washington, the University of Pennsylvania, National Institutes of Health, Illinois Institute of Technology, SUNY, Carnegie Mellon University (CMU), and others. "We are no longer publishing new signatures to the pledge on this website, but you can still support our movement." #solidstatelife #domesticpolitics
0
0
0
@waynerad
Wayne Radinsky
13 days
NootCode: Non-algorithmic LeetCode? "Becoming an exceptional software engineer requires more than just acing algorithm quizzes. It requires mastery of crucial skills including Computer Science Fundamentals, System Design, Scenario Analysis and more. Just passive learning is not enough. NootCode offers an online judging and coaching platform where you can master all these non-algorithmic skills through hands-on exercises just like practicing coding on LeetCode." #computerscience
0
0
4
@waynerad
Wayne Radinsky
13 days
"NYCerebro is a CLIP-powered search engine for NYC traffic cameras. It uses AI to find camera views matching your text descriptions." Built in two hours in a hackathon. "The 'magic' is that we use OpenAI's CLIP model to embed a semantic representation of each traffic camera's current image and then compare that with a the text vector of the user's search query. By indexing all of the camera images' vectors in a vector database we can find the 'most similar' images for a search query." "The craziest part of this hack? All of the frontend code was 100% written with Vercel's v0! With some detailed prompting, a bit of back and forth debugging, and the occasional escalation to OpenAI o1 (and copy/pasting back its reply to v0) we were able to create a novel app in just two hours." "For the backend, we used a Roboflow Workflow to calculate the CLIP embeddings and a Custom Python Block to save the results to our Supabase database (with pgvector as the vector store)." To give it a whirl, I punched in "construction and bridge". I got back a lot of cameras that all had bridges in them, but usually not any construction going on. CLIP stands for "Contrastive Language-Image Pre-training". The main idea is that you find images that have specific text in the description, and you also find images that *don't* have that specific text in the description, and you train the neural network on the "contrast" between the two. This technique, CLIP, was instrumental in making models that generate images from text, like OpenAI's Dall-E, Google's Imogen, Stable Diffusion, etc. #solidstatelife #ai #contrastivelearning
0
0
0
@waynerad
Wayne Radinsky
13 days
"In Germany, the number of ATMs blown up fell slightly in 2023 -- to 461 cases, according to the BKA. Solid explosives were almost always used, which caused great damage." Um. What? ATMs are getting blown up in Germany? And 461 ATMs blown up in a year (2023) means there has been a *decrease*? I never heard of this. This is from an article written on August 29, 2024. Way back in the late 90s, I went to a computer security talk where a security researcher told about a time where criminals put on hard hats and construction uniforms and used a construction machine to scoop up an ATM machine. His point was that ATMs were robbed in many ways other than breaking the encryption between the machine and the bank. People discovered tricks like putting in ATM cards with test codes on the magnetic strip and punching in special test codes that would get the machine to, for example, pop out 10 bills of the lowest denomination. But nobody ever broke the encryption. People focus a lot of effort on the encryption algorithms, but if the *rest* of the system isn't also secure, it doesn't matter how good the encryption algorithm is, people will still be able to break the system. I was under the impression the scooping the machine with the construction equipment was something that happened once. Maybe it was but apparently in Germany, simply blowing up ATMs with explosives is a regular thing? "Bank robbers often come at night and let it rip. As the latest figures from the Federal Criminal Police Office (BKA -- for "Bundeskriminalamtes") show, ATMs remain a popular target for bombing and robbery. In 2023, a total of 461 ATMs were blown up. After the record year of 2022 with 496 attempted and completed explosions, the number of cases fell by 7.1 percent. This is evident from the BKA's 2023 Federal Situation Report. One reason for the decline: banks and savings banks have been taking more active steps to combat the problem for some time now. They rely on secure devices or lock their branches at night. Not least because the explosions repeatedly endangered residents and neighboring houses." "As the BKA further explained, the amount of cash stolen by perpetrators was also somewhat lower last year. Compared to the previous year, it fell by 5.1 percent to 28.4 million euros. However, the sum remains 'at a comparably high level,' the authority said. The reason is the 'high proportion of cases' in which perpetrators obtained cash after a successful explosion. This was achieved in a total of 276 crimes." "According to official statistics, solid explosives with high detonation energy were used in 87 percent of all explosions. According to the BKA, pyrotechnic devices are used in particular, but military explosives and, in rare cases, homemade explosive devices are also increasingly used. This approach caused 'significant damage' and exposed emergency personnel and bystanders to 'great danger,' the BKA explained. In contrast, it is becoming increasingly rare for a gas or gas mixture to be introduced into the ATM and then ignited. This could also be due to the fact that the failure rate is significantly higher when using gas mixtures." "The suspects' propensity to violence remains high, according to the BKA. Last year, fatal traffic accidents were associated with "risky escape behavior" for the first time." "According to the BKA, the police managed to identify more suspects last year. The number rose by 57 percent to 201 compared to 2022. Almost 90 percent of them traveled from abroad to commit the crime. 
160 of the suspects identified had their main residence in the Netherlands -- the vast majority. Many perpetrators belong to professionally organized gangs." So, blame the Netherlands. Alrighty then. One possibility for banks to improve the technical security of their ATMs "is systems that automatically color banknotes in the event of an explosion, thus making them unusable for the perpetrators." "In July, the federal government also decided to take action. In future, anyone who blows up an ATM will be punished with a prison sentence of at least two years." Coming from a US perspective, 2 years doesn't seem like much. "The draft law presented at that time also provides for changes to the Explosives Act." Whatever that is. ("Das Sprengstoffgesetz".) And the article stops there and doesn't say what the proposed changes might be. Link goes to an article in German. Translation by Google Translate. "The Explosives Act (Law on Explosive Substances) regulates the civil handling and trade of, as well as the import and transit of, explosive substances and explosive accessories in Germany. It is the most important legal source of German explosives law." #cybersecurity #physicalsecurity #atms #germany
0
0
1