Hazelcast v3.7-EA 发布,一个数据分发和集群平台

jopen 8年前
   <p style="text-align: center;"><img alt="" src="https://simg.open-open.com/show/b77962d4f5222b8dce11df9534a4d8b8.png" /></p>    <p>Hazelcast是一个高度可扩展的数据分发和集群平台。 可用于实现分布式数据存储、数据缓存。特性包括:</p>    <ul>     <li>提供java.util.{Queue, Set, List, Map} 分布式实现。</li>     <li>提供java.util.concurrent.ExecutorService 分布式实现。</li>     <li>提供java.util.concurrency.locks.Lock 分布式实现。</li>     <li>提供分布式主题的发布/ 订阅消息传递</li>     <li>提供用于一对多关系的分布式MultiMap</li>     <li>提供用于安全集群的Socket 层加密</li>     <li>支持同步和异步持久化 </li>     <li>通过JCA 与J2EE 容器集成和事务支持</li>     <li>为Hibernate 提供二级缓存Provider</li>     <li>提供分布式监听器和事件</li>     <li>支持集群信息和成员节点事件</li>     <li>通过JMX 监控和管理集群</li>     <li>支持动态HTTP 会话集群</li>     <li>实现动态集群</li>     <li>支持动态扩展到几百个服务器</li>     <li>利用备份实现动态分割</li>     <li>支持动态故障恢复</li>     <li>超简单易用,只需要一个jar 包</li>     <li>超快,每秒达到成千上万操作</li>     <li>超小,不到2MB</li>     <li>超有效, CPU 和RAM 非常低耗</li>    </ul>    <p style="text-align: center;"><img alt="" src="https://simg.open-open.com/show/a70e5840ee87a09f3b0d298770b7af4b.jpg" /></p>    <p style="text-align: center;"><strong>Hazelcast拓扑</strong></p>    <h2>更新日志</h2>    <h3>新特性</h3>    <ul>     <li><strong>First Modularized Release:</strong> 3.7 is the first fully modularized version of Hazelcast. We have separate repos, maven modules and release cycles for many aspects of Hazelcast now. Each <a href="/misc/goto?guid=4958991574049593496" onclick="__gaTracker('send', 'event', 'outbound-article', 'http://hazelcast.org/clients-languages/', 'client/language');">client/language</a> and <a href="/misc/goto?guid=4958991574151179025" onclick="__gaTracker('send', 'event', 'outbound-article', 'http://hazelcast.org/plugins/', 'plugin');">plugin</a> is now a module. When you download a 3.7 distribution, it contains the latest released version. But we can release updates, new features and bug fixes much faster than the Hazelcast core. When we say in this blog we will release <em>something</em> parallel to 3.7, we mean we are releasing a module. And it speeds up development. And of course it is easier to contribute to as an open source contributor. A win-win all round.</li>     <li><strong>Custom eviction policies:</strong> In Hazelcast you could always set an eviction policy from one of LRU or LFU. But what if you want more flexibility to suit custom requirements of your app. Custom eviction policy exactly helps on that. We implemented a custom eviction both for our Map and JCache implementations. Here you can see an example of an odd-based evictor. It works with our O(1) probabilistic evictors. You simply provide a comparator and we choose the best eviction candidate.<strong>Fault-Tolerant ExecutorService:</strong> Imagine you send executables to hazelcast nodes and they take hours to complete. What if one the nodes crashes and you do not know whether the task completed or not? In 3.7, we introduce <code>DurableExecutorService</code>. It guarantees ‘execute at least once’ semantics. Its API is a narrowing of<code>IExecutorService</code>. Unlike <code>IExecutorService</code>, users will not be able to submit/execute tasks to selected member/members. (Note: This module has not been released with EA1. It will be available in EA2 in a few weeks.</li>     <li><strong>New Cloud Integrations:</strong> We are releasing the CloudFoundry and OpenShift plugins parallel to the 3.7 release. The Hazelcast members deployed to CloudFoundry and OpenShift will discover each other automatically. Also you will have an option to connect and use Hazelcast as a service inside CloudFoundry and OpenShift. You also have the option of using this with Docker – <a href="/misc/goto?guid=4958991574245362047" onclick="__gaTracker('send', 'event', 'outbound-article', 'https://hub.docker.com/r/hazelcast/openshift/', 'https://hub.docker.com/r/hazelcast/openshift/');">https://hub.docker.com/r/hazelcast/openshift/</a>. See Rahul’s following blog to learn more about using the <a href="/misc/goto?guid=4958991574338258327">CloudFoundry Integration</a>.<br /> We also released our <a href="/misc/goto?guid=4958991574430418836" onclick="__gaTracker('send', 'event', 'outbound-article', 'https://github.com/hazelcast/hazelcast-azure', 'Azure Cloud Discovery Plugin');">Azure Cloud Discovery Plugin</a> for running Hazelcast on Azure. Hazelcast will also be up in<a href="/misc/goto?guid=4958991574518478465" onclick="__gaTracker('send', 'event', 'outbound-article', 'https://azure.microsoft.com/en-us/marketplace/?term=hazelcast', 'Microsoft Azure Marketplace');">Microsoft Azure Marketplace</a> before the end of June. Look out for the forthcoming blog on that.</li>     <li><strong>Apache Spark Connector:</strong> Parallel to the 3.7 release we are releasing our new plugin Hazelcast-Spark connector. It allows Hazelcast Maps and Caches to be used as shared RDD caches by Spark using the Spark RDD API as per the following cache example.<br /> Both Java and Scala Spark APIs are supported. See the module <a href="/misc/goto?guid=4958991574607690601" onclick="__gaTracker('send', 'event', 'outbound-article', 'https://github.com/hazelcast/hazelcast-spark/', 'repo');">repo</a> for details.</li>     <li><strong>Reactive</strong> – We did the hard work of adding back pressure to Hazelcast over the 3.5 and 3.6 releases. Hazlecast internally is fully asynchronous. Now we are beginning to expose reactive methods. In 3.7 most of the IMAP async methods plus AsyncAtomicLong have been added. Vert.x-Hazlecast is a very popular combination. Vert.x 3.3.0-CR2 has the new reactive Hazelcast integration for much higher performance. The reactive methods return</li>     <li><strong>Hazelcast CLI:</strong> Also new in 3.7 is a command line interface (“CLI”) to manage the lifecycle of Hazelcast members. The CLI makes hazelcast more dev-ops friendly as well as making testing easier for developers. Here how to install and use CLI: <a href="/misc/goto?guid=4958991574701699375" onclick="__gaTracker('send', 'event', 'outbound-article', 'https://github.com/hazelcast/hazelcast-cli/', 'https://github.com/hazelcast/hazelcast-cli/');">https://github.com/hazelcast/hazelcast-cli/</a></li>     <li><strong>WAN Replication Enhancements (Enterprise Only):</strong> With 3.7; we implemented the ability to resynchronize a remote WAN cluster. This was implemented for IMap (Cache implementation is left for 3.8). This is very useful when you initiate a WAN connection or need to reinitiate after maintenance to a remote cluster.</li>     <li><strong>WAN Replication via Solace (Enterprise Only):</strong> We also added WAN replication with Solace, a high performance enterprise grade messaging solution.</li>    </ul>    <h2>Hazelcast内部的显著改进</h2>    <ul>     <li> <p><strong>Improvements on partitioning system:</strong> Our community had detected the following issue: <em>During a migration process, there can happen a moment that data is kept by number of nodes which is less than the configured backup count even there is enough number of nodes in the cluster. If any node crashes at this unfortunate moment, we were losing data.</em> Although, some may claim this is an edge case scenario; still it was conflict with our guarantee that we give to our users. So we designed and implemented major improvements in our partitioning and migration system. You can find a detailed explanation of the solution here:<a href="/misc/goto?guid=4958991574793501765" onclick="__gaTracker('send', 'event', 'outbound-article', 'https://hazelcast.atlassian.net/wiki/display/COM/Avoid+Data+Loss+on+Migration+-+Solution+Design', 'https://hazelcast.atlassian.net/wiki/display/COM/Avoid+Data+Loss+on+Migration+-+Solution+Design');">https://hazelcast.atlassian.net/wiki/display/COM/Avoid+Data+Loss+on+Migration+-+Solution+Design</a></p> </li>     <li> <p><strong>Graceful shutdown improvements:</strong> We need to ensure data safety while shutting down multiple nodes concurrently. But there was a counter example: <em>When a node shuts down, it checks if all 1st backups of its partitions are synced without checking if the backup node is also shutting down or not. There is a race situation here. If both owner and backup nodes shutdown at the same time, we lose data since owner is not aware that 1st backup is also shutting down.</em> See the following PR to get an idea about the solution:<a href="/misc/goto?guid=4958991574883301173" onclick="__gaTracker('send', 'event', 'outbound-article', 'https://github.com/hazelcast/hazelcast/pull/7989', 'https://github.com/hazelcast/hazelcast/pull/7989');">https://github.com/hazelcast/hazelcast/pull/7989</a></p> </li>     <li> <p><strong>Improvement to the threading model:</strong> In 3.7, there is now at least 1 generic priority thread that will only process generic priority operations like member/client heartbeats. This means that under load the cluster remains more stable since these important operations get processed. See following PR for details of the problem and solution: <a href="/misc/goto?guid=4958991574980388133" onclick="__gaTracker('send', 'event', 'outbound-article', 'https://github.com/hazelcast/hazelcast/pull/7857', 'https://github.com/hazelcast/hazelcast/pull/7857');">https://github.com/hazelcast/hazelcast/pull/7857</a></p> </li>     <li> <p><strong>Improvements to the invocation system:</strong> Invocation service is one of the parts that we were wanting to improve and simplify. Because it is complex, it was hard to fix bugs, even minor changes were prone to regressions. We simplified the invocation logic and fixed some ambiguities. Although it is a completely internal development, it has made Hazelcast more stable preventing many problems regarding invocation system. Relatedly various enhancements (e.g. moving IsExecutingOperation into its own thread) fixed several issues like the following: <a href="/misc/goto?guid=4958991575067252873" onclick="__gaTracker('send', 'event', 'outbound-article', 'https://github.com/hazelcast/hazelcast/issues/6248', 'https://github.com/hazelcast/hazelcast/issues/6248');">https://github.com/hazelcast/hazelcast/issues/6248</a></p> </li>     <li> <p><strong>Performance improvement on map.putAll():</strong> We introduced grouping and batching of remote invocations but also reduced some internal litter for higher performance. Our efforts resulted in a performance gain up to 15% especially when the argument size is bigger. If you want to read some code, here it is:<a href="/misc/goto?guid=4958991575162507483" onclick="__gaTracker('send', 'event', 'outbound-article', 'https://github.com/hazelcast/hazelcast/pull/8023', 'https://github.com/hazelcast/hazelcast/pull/8023');">https://github.com/hazelcast/hazelcast/pull/8023</a></p> </li>     <li> <p><strong>Prevent blocking reads in transactions:</strong> To provide atomicity for transactions, we were blocking the reads on entries which are locked transactionally. This is not an optimized solution. We changed the architecture so that we block reads just before the <code>commit</code>.</p> </li>     <li> <p><strong>Improvements on Hot Restart and HD (Enterprise HD Only):</strong> We introduced batching of hot restart operations (when fsync is enabled) that will improve performance notably. Moreover, we optimized memory usage of Hot Restart by persisting values to disk, reducing metadata. In high density memory, we created an abstraction layer for safer memory access which also handled unaligned memory access to enable HD on Oracle Sparc CPUs commonly found in Solaris systems.</p> </li>     <li> <p><strong>.NET Client Enhancements:</strong> Besides working on new clients, our team is enhancing existing clients. We added predicate and SSL support for our .Net client.</p> </li>    </ul>    <h2>下载</h2>    <ul>     <li><a href="http://download.hazelcast.com/download.jsp?version=hazelcast-3.7-EA&p=195763550"><strong>ZIP</strong></a> </li>     <li><a href="/misc/goto?guid=4958991575349815427" rel="nofollow"><strong>Source code</strong> (zip)</a></li>     <li><a href="/misc/goto?guid=4958991575436183336" rel="nofollow"><strong>Source code</strong> (tar.gz)</a></li>    </ul>    <p> </p>    <p> </p>    <p>   </p>