
Showing posts from March 19, 2023

Machine Learning Workload and GPGPU NUMA Node Locality

https://frankdenneman.nl/2020/01/30/machine-learning-workload-and-gpgpu-numa-node-locality/

In the previous article "PCIe Device NUMA Node Locality" I covered the physical connection between the processor and the PCIe device and briefly touched upon machine learning workloads with regard to PCIe NUMA locality. This article zooms in on why it is important to consider PCIe NUMA locality.

General-Purpose Computing on Graphics Processing Units

New compute-intensive workloads take advantage of the programming model called general-purpose computing on GPUs (GPGPU). With GPGPU, the many cores integrated on modern GPUs are used to offload a vast number of (parallel) compute threads from the CPU. By adding another computational device with different characteristics, a heterogeneous compute architecture is born. GPUs are optimized for streaming sequential (or easily predictable) access patterns, while CPUs are designed for general access patterns and concurrency of threads.
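To make the locality point concrete, here is a minimal sketch (Linux-only, assuming sysfs is available and using a hypothetical GPU PCI address) that looks up which NUMA node a GPU is attached to and pins the data-feeding process to that node's CPUs. It is an illustration of the idea, not the article's own tooling.

```python
# Minimal sketch (hypothetical PCI address): find the NUMA node a GPU hangs off
# and pin this CPU-side process to cores on that same node, so host<->device
# transfers stay local instead of crossing the inter-socket interconnect.
import os

GPU_PCI_ADDR = "0000:3b:00.0"  # assumption: replace with your GPU's PCI address (see lspci)
sysfs = f"/sys/bus/pci/devices/{GPU_PCI_ADDR}"

with open(f"{sysfs}/numa_node") as f:
    numa_node = int(f.read().strip())   # -1 means the platform did not report locality

with open(f"{sysfs}/local_cpulist") as f:
    cpulist = f.read().strip()          # e.g. "0-19,40-59"

def parse_cpulist(spec: str) -> set[int]:
    """Expand a kernel cpulist string like '0-3,8' into {0, 1, 2, 3, 8}."""
    cpus = set()
    for part in spec.split(","):
        lo, _, hi = part.partition("-")
        cpus.update(range(int(lo), int(hi or lo) + 1))
    return cpus

local_cpus = parse_cpulist(cpulist)
os.sched_setaffinity(0, local_cpus)     # keep this process on the GPU-local cores
print(f"GPU {GPU_PCI_ADDR} is on NUMA node {numa_node}; pinned to CPUs {sorted(local_cpus)}")
```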

My unwavering opinion on current (auto-regressive) LLMs

Yann LeCun • VP & Chief AI Scientist at Meta • 2h

My unwavering opinion on current (auto-regressive) LLMs:

1. They are useful as writing aids.
2. They are "reactive" & don't plan nor reason.
3. They make stuff up or retrieve stuff approximately.
4. That can be mitigated but not fixed by human feedback.
5. Better systems will come.
6. Current LLMs should be used as writing aids, not much more.
7. Marrying them with tools such as search engines is highly non trivial.
8. There *will* be better systems that are factual, non toxic, and controllable. They just won't be auto-regressive LLMs.
9. I have been consistent with the above while defending Galactica as a scientific writing aid.
10. Warning folks that AR-LLMs make stuff up and should not be used to get factual advice.
11. Warning that only a small superficial portion of human knowledge can ever be captured by LLMs.
12. Being clear that better systems will be app

[Career reflections] Behind the ChatGPT craze, don't forget the essence of AI - onlooker thoughts from a veteran practitioner | substance only

As a Data person who has been working for quite a few years, when I first encountered the concept of AI it wasn't nearly this hot; back then the models were all called machine learning / statistical models, and only after deep learning became widespread did people start calling it artificial intelligence, AI.

These past few days everyone's feeds have been flooded with ChatGPT, and it feels like ML/AI is having a second spring, so the industry should by rights keep celebrating. But as the saying goes, things reverse when pushed to an extreme. It is undeniable that ChatGPT is a major breakthrough in the modeling world, but I still want to give non-practitioners, and less experienced "AI" practitioners, a heads-up: if you can see the nature behind these AI models clearly, you can avoid a lot of pitfalls.

From the beginning until now I have always disliked calling these deep learning models artificial intelligence. Let me say the important thing three times: these models have no intelligence, no intelligence, no intelligence. Anyone who has worked with deep learning knows that, even though you can call this weak AI as opposed to strong AI, in actual use you will very likely still catch these models being plainly dumb. Backpropagation is the core foundation of these models. You can say the diagrams and arrows used to derive backpropagation look a bit like neurons and fantasize that this is how the human brain thinks, but anyone with a bit of sense knows how enormously complex a biological brain is. Can repeatedly cranking through basic linear algebra / calculus really simulate a brain? Given enough time you could compute the model's output by hand; can that be put on the same level as a human brain? A person may need only one glance to learn something new, while these models need a huge pile of manual labels for training. If we are talking about the strong AI of science fiction, human society honestly hasn't even gotten started; there must be major branches of the tech tree we haven't unlocked, and real artificial intelligence cannot possibly be built from just these basic mathematical principles.

In fact, many big companies and industry leaders understand this, but ordinary people find it hard to see through. Some AI applications look as if the model can really "think": face recognition, virtual try-on, "recommended for you", chatbots like ChatGPT. So with some commercial packaging, slapping on the label "artificial intelligence" does attract more users. Do these applications actually have much to do with the intelligence of a real brain? Not really. The most important contribution of DL models comes in two steps. The first step is taking data sources that previously could hardly be analyzed, such as images, text, and speech, and decomposing them into embeddings. The second step is a more efficient way, avoiding brute-force enumeration, of finding the result that best matches the true labels. Countless techniques have been developed along the way, but in essence it is statistics: finding the combination that most closely matches the historical data and handing it to you. Take NLP models, for example: based on large amounts of historical data, they first turn all the sentences in a text into data that can be analyzed,
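To make the post's two-step description (turn text into numbers, then pick the statistically most common continuation from historical data) concrete, here is a minimal sketch; the toy corpus, the one-line vocabulary, and the bigram table below are hypothetical illustrations, not how any production LLM is built.

```python
# Minimal sketch (toy example): turn raw text into analyzable data (word -> index)
# and "predict" the next word purely from historical co-occurrence counts,
# i.e. statistics over past data rather than anything like reasoning.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat . the dog sat on the rug .".split()

# Step 1: map each word to an index so raw text becomes numeric data.
vocab = {w: i for i, w in enumerate(dict.fromkeys(corpus))}

# Step 2: count which word historically follows which (a bigram table).
follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def predict_next(word: str) -> str:
    """Return the word that most often followed `word` in the historical data."""
    if word not in follows:
        return "<unknown>"
    return follows[word].most_common(1)[0][0]

print(vocab["cat"])          # the word "cat" as a number the model can work with
print(predict_next("the"))   # most frequent historical continuation of "the"
```

The point of the sketch is the same as the post's: the output is whatever combination best matches the historical data, which is why it can look fluent while having no understanding of what it says.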