<?xml version="1.0" encoding="UTF-8"?><rss version="2.0"
	xmlns:content="http://purl.org/rss/1.0/modules/content/"
	xmlns:wfw="http://wellformedweb.org/CommentAPI/"
	xmlns:dc="http://purl.org/dc/elements/1.1/"
	xmlns:atom="http://www.w3.org/2005/Atom"
	xmlns:sy="http://purl.org/rss/1.0/modules/syndication/"
	xmlns:slash="http://purl.org/rss/1.0/modules/slash/"
	
	xmlns:georss="http://www.georss.org/georss"
	xmlns:geo="http://www.w3.org/2003/01/geo/wgs84_pos#"
	>

<channel>
	<title>Microsofts &#8211; Spress</title>
	<atom:link href="https://en.spress.net/tag/microsofts/feed/" rel="self" type="application/rss+xml" />
	<link>https://en.spress.net</link>
	<description>Spress is a general newspaper in English which is updated 24 hours a day.</description>
	<lastBuildDate>Wed, 23 Jun 2021 23:30:09 +0000</lastBuildDate>
	<language>en-US</language>
	<sy:updatePeriod>hourly</sy:updatePeriod>
	<sy:updateFrequency>1</sy:updateFrequency>
	
<site xmlns="com-wordpress:feed-additions:1">191965906</site>	<item>
		<title>Lightning repair: Microsoft&#8217;s latest notice: the new win11 system will be announced soon!</title>
		<link>https://en.spress.net/lightning-repair-microsofts-latest-notice-the-new-win11-system-will-be-announced-soon/</link>
		
		<dc:creator><![CDATA[editor]]></dc:creator>
		<pubDate>Wed, 23 Jun 2021 23:30:09 +0000</pubDate>
				<category><![CDATA[World]]></category>
		<category><![CDATA[Announced]]></category>
		<category><![CDATA[latest]]></category>
		<category><![CDATA[lightning]]></category>
		<category><![CDATA[Microsofts]]></category>
		<category><![CDATA[Notice]]></category>
		<category><![CDATA[Repair]]></category>
		<category><![CDATA[System]]></category>
		<category><![CDATA[Win11]]></category>
		<guid isPermaLink="false">https://en.spress.net/lightning-repair-microsofts-latest-notice-the-new-win11-system-will-be-announced-soon/</guid>

					<description><![CDATA[Microsoft had previously announced that it would hold a &#8220;What&#8217;s next for Windows&#8221; event on June 24, at which the brand-new Windows 11 system and related applications would be unveiled. Sure enough, today (June 23) the official Windows Twitter account posted a new Windows 11 teaser. From the Windows official [&#8230;]]]></description>
										<content:encoded><![CDATA[<p><strong>Microsoft had previously announced that it would hold a &#8220;What&#8217;s next for Windows&#8221; event on June 24, at which the brand-new Windows 11 system and related applications would be unveiled. Sure enough, today (June 23) the official Windows Twitter account posted a new Windows 11 teaser.</strong></p>
<p><span id="more-27159"></span></p>
<p>In the official Windows tweet, the avatar and background image have been replaced with blue elements that roughly match the default Win11 wallpaper, further confirming that the &#8220;next generation of Windows&#8221; is Win11. The brand-new Win11 delivers a fresh UI visual experience: rounded corners, floating layers, new animations, and the redesigned Start menu all give users a refreshing feeling.</p>
<p>According to Microsoft, it plans to &#8220;innovate and develop exciting sensor technology&#8221; for future versions of Windows. As part of Windows 11 or an upcoming Windows update, popular features such as &#8220;user presence detection&#8221;, gesture detection, and ambient light sensing will be improved to &#8220;achieve a dazzling user experience on Windows.&#8221;</p>
<p>Important Windows 11 changes include rounded corners in the Start menu, Windows Search, context menus, and other windows. Microsoft has also updated traditional components such as Control Panel and Device Manager, adopting rounded corners and adding the shadow effects of the Fluent Design language.</p>
<p>What other changes will Windows 11 bring? Stay tuned: we will continue to publish the latest news about Windows 11.</p>
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">27159</post-id>	</item>
		<item>
		<title>Dominates the list of multiple CV tasks, only two days of open source, Microsoft&#8217;s hierarchical ViT model received nearly 2k stars</title>
		<link>https://en.spress.net/dominates-the-list-of-multiple-cv-tasks-only-two-days-of-open-source-microsofts-hierarchical-vit-model-received-nearly-2k-stars/</link>
		
		<dc:creator><![CDATA[editor]]></dc:creator>
		<pubDate>Sun, 18 Apr 2021 08:58:06 +0000</pubDate>
				<category><![CDATA[Tech]]></category>
		<category><![CDATA[days]]></category>
		<category><![CDATA[dominates]]></category>
		<category><![CDATA[hierarchical]]></category>
		<category><![CDATA[List]]></category>
		<category><![CDATA[Microsofts]]></category>
		<category><![CDATA[Model]]></category>
		<category><![CDATA[multiple]]></category>
		<category><![CDATA[open]]></category>
		<category><![CDATA[received]]></category>
		<category><![CDATA[source]]></category>
		<category><![CDATA[stars]]></category>
		<category><![CDATA[tasks]]></category>
		<category><![CDATA[ViT]]></category>
		<guid isPermaLink="false">https://en.spress.net/dominates-the-list-of-multiple-cv-tasks-only-two-days-of-open-source-microsofts-hierarchical-vit-model-received-nearly-2k-stars/</guid>

					<description><![CDATA[Heart of the Machine Report. Editor: Dimensions. Microsoft&#8217;s Swin Transformer, which has dominated major CV task leaderboards, recently open-sourced its code and pre-trained models. Since Google proposed the Transformer in June 2017, it has gradually become the mainstream model in natural language processing. Recently, the Transformer has even begun its own cross-border [&#8230;]]]></description>
										<content:encoded><![CDATA[<p>Heart of the Machine Report</p>
<p><strong>Editor: Dimensions</strong></p>
<p>Microsoft&#8217;s Swin Transformer, which has dominated major CV task leaderboards, recently open-sourced its code and pre-trained models. Since Google proposed the Transformer in June 2017, it has gradually become the mainstream model in natural language processing. Recently, the Transformer has even begun crossing over into computer vision, where a number of new Transformer-based models have emerged, such as Google&#8217;s ViT for image classification and SETR from Fudan, Oxford, Tencent, and other institutions. As a result, &#8220;Is the Transformer omnipotent?&#8221; briefly became a hot topic in the machine learning community.</p>
<p>Not long ago, researchers from Microsoft Research Asia proposed a hierarchical vision Transformer computed with shifted windows, which they call Swin Transformer. Compared with the earlier ViT model, Swin Transformer makes two improvements: first, it introduces the hierarchical construction commonly used in CNNs to build a hierarchical Transformer; second, it introduces the idea of locality, performing self-attention computation within non-overlapping local windows.</p>
<p>Link to the paper: https://arxiv.org/pdf/2103.14030.pdf</p>
<p>First, the overall workflow of Swin Transformer: Figure 3a shows the overall architecture, and Figure 3b shows two consecutive Swin Transformer blocks.</p>
<img fifu-featured="1" decoding="async" class="content-picture" src="https://inews.gtimg.com/newsapp_bt/0/13412877799/1000">
<p>The highlight of this work is the use of shifted windows to compute hierarchical Transformer representations. Limiting self-attention computation to non-overlapping local windows while still allowing cross-window connections gives the hierarchical structure the flexibility to model at different scales, with computational complexity linear in image size.</p>
<p>Figure 2 below shows the workflow of computing self-attention with shifted windows in the Swin Transformer architecture:</p>
<img decoding="async" class="content-picture" src="https://inews.gtimg.com/newsapp_bt/0/13412877800/1000">
<p>These properties allow the model to achieve very competitive performance on a range of vision tasks: an image classification accuracy of 86.4% on the ImageNet-1K dataset, and 58.7 box AP for object detection and 51.1 mask AP on the COCO test-dev set. At present, Swin-L (a Swin Transformer variant) has achieved SOTA on both the object detection and instance segmentation tasks on the COCO minival and COCO test-dev sets.</p>
<img decoding="async" class="content-picture" src="https://inews.gtimg.com/newsapp_bt/0/13412877892/1000">
<p>In addition, Swin-L also achieves SOTA on the semantic segmentation task on the ADE20K val and ADE20K datasets.</p>
<p><strong>Open-source code and pre-trained models</strong></p>
<p>Not long after the Swin Transformer paper was published, Microsoft officially released the code and pre-trained models on GitHub, covering the image classification, object detection, and semantic segmentation tasks. Only two days after launch, the project had received 1,900 stars.</p>
<img decoding="async" class="content-picture" src="https://inews.gtimg.com/newsapp_bt/0/13412877893/1000">
<p>Project address: https://github.com/microsoft/Swin-Transformer</p>
<p>First, the image classification task. The accuracy of the Swin-T, Swin-S, Swin-B, and Swin-L variants on the ImageNet-1K and ImageNet-22K datasets is as follows:</p>
<img decoding="async" class="content-picture" src="https://inews.gtimg.com/newsapp_bt/0/13412877894/1000">
<p>Second, the object detection task. The results of the Swin-T, Swin-S, Swin-B, and Swin-L variants on the COCO object detection (2017 val) dataset are as follows:</p>
<img decoding="async" class="content-picture" src="https://inews.gtimg.com/newsapp_bt/0/13412877971/1000">
<p>Finally, the semantic segmentation task. The results of the Swin-T, Swin-S, Swin-B, and Swin-L variants on the ADE20K semantic segmentation (val) dataset are as follows. Currently, Swin-L has achieved a SOTA validation mIoU score of 53.50%.</p>
<img decoding="async" class="content-picture" src="https://inews.gtimg.com/newsapp_bt/0/13412877972/1000">
<p><strong>Build New, See Wisdom &#8211; 2021 Amazon Cloud Technology AI Online Conference</strong></p>
<p>April 22, 14:00-18:00. Why do so many machine learning workloads choose Amazon Cloud Technology? How can large-scale machine learning and enterprise digital transformation be achieved? &#8220;Build New, See Wisdom &#8211; 2021 Amazon Cloud Technology AI Online Conference&#8221; is led by Alex Smola, vice president of global artificial intelligence technology and distinguished scientist at Amazon Cloud Technology, and Gu Fan, general manager of the Amazon Cloud Technology Greater China product department. More than 40 heavyweight guests will analyze the innovation culture of Amazon cloud technology in depth across the keynote speech and six sessions, and reveal how AI/ML can help companies accelerate innovation.</p>
<p>Session 1: Amazon Machine Learning Practice Revealed<br>
Session 2: Artificial Intelligence Empowers Digital Transformation of Enterprises<br>
Session 3: The Way to Realize Large-Scale Machine Learning<br>
Session 4: AI Services Help the Internet Innovate Rapidly<br>
Session 5: Open Source and Frontier Trends<br>
Session 6: Intelligent Ecology of Win-Win Cooperation</p>
<p><strong>Which of the six sessions&#8217; topics are you most interested in?</strong></p>
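<p>The window-partition and window-shift operations described above can be sketched in a few lines of NumPy. This is an illustrative sketch only, not the official Swin Transformer implementation; the helper names <code>window_partition</code> and <code>shift_windows</code> are our own, and the actual code also applies attention masking after the cyclic shift.</p>

```python
import numpy as np

def window_partition(x, ws):
    """Split an (H, W, C) feature map into non-overlapping ws x ws windows."""
    H, W, C = x.shape
    x = x.reshape(H // ws, ws, W // ws, ws, C)
    # -> (num_windows, ws, ws, C); self-attention is computed per window
    return x.transpose(0, 2, 1, 3, 4).reshape(-1, ws, ws, C)

def shift_windows(x, ws):
    """Cyclically shift the map by ws//2 so the next block's windows straddle
    the previous block's window boundaries (cross-window connections)."""
    return np.roll(x, shift=(-(ws // 2), -(ws // 2)), axis=(0, 1))

# Toy 8x8 map with 1 channel and window size 4 -> four 4x4 windows.
fmap = np.arange(64, dtype=float).reshape(8, 8, 1)
wins = window_partition(fmap, 4)
print(wins.shape)  # (4, 4, 4, 1)

# Shifted block: same partition on the cyclically shifted map.
wins2 = window_partition(shift_windows(fmap, 4), 4)
print(wins2.shape)  # (4, 4, 4, 1)
```

<p>Because attention runs inside fixed-size windows, the cost grows with the number of windows, i.e. linearly in image size, rather than quadratically as with global self-attention over all patches.</p>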
]]></content:encoded>
					
		
		
		<post-id xmlns="com-wordpress:feed-additions:1">4247</post-id>	</item>
	</channel>
</rss>