
AirBar: Turn a Laptop into a Touchscreen (Near-Infrared Sensing)

This is another product along the lines of Leap Motion's finger-tracking technology, and it likewise uses near-infrared sensing (see its technical documentation). The idea is almost identical to the previously covered Touch+, except that Touch+ mounts its sensor above the laptop screen to detect hands over the keyboard, while AirBar sits below the screen and turns the screen itself into a touchscreen. It also simplifies the use case: it exists purely to enable touch, so it is plug-and-play, just like a mouse.

To see what it actually is, watch the video:
https://v.qq.com/iframe/player.html?vid=k0382rvj88i&tiny=0&auto=0

A few more pictures:

Their official site: http://www.air.bar/

If you are interested, you can buy it directly on Amazon China (or, of course, the all-powerful Taobao); at a bit over 500 RMB it is not expensive.

If you cannot see the links, please tap "Read the original article".


Microsoft Open-Sources the RoomAlive Toolkit

Source: CSDN. It looks fairly complex, probably something for research labs and big companies to tinker with.

RoomAlive uses Kinect sensors and projectors to project a virtual world anywhere in a room and lets it interact with the user; the toolkit supports creating dynamic projection mapping experiences. Microsoft Research has used the toolkit for several years in a variety of interactive projection mapping projects, such as RoomAlive, IllumiRoom, Mano-a-Mano, Beamatron, and Room2Room.

The toolkit contains two separate projects:

  • ProCamCalibration – a C# project used to calibrate multiple projectors and Kinect cameras in a room to support immersive, dynamic projection mapping experiences. The codebase also includes a simple projection mapping sample built with Direct3D.
  • RoomAlive Toolkit for Unity – a set of Unity scripts and tools that enable immersive, dynamic projection mapping experiences in Unity, based on the projector/camera calibration from ProCamCalibration. This project also includes a pipeline for streaming and rendering Kinect depth data in Unity.

Here is an example scene from the RoomAlive project (using 6 projectors and 6 Kinect cameras):

RoomAlive Scene

More details on GitHub: https://github.com/Kinect/RoomAliveToolkit


Let's go to GitHub and take a look at the README page.

RoomAlive Toolkit for Unity README

RoomAlive Toolkit for Unity is a set of Unity scripts and tools that enable immersive, dynamic projection mapping experiences, based on the projection-camera calibration from RoomAlive Toolkit.

The toolkit can be used to:

  • Bring Kinect depth and color images, skeleton data and audio into Unity. It includes Unity shaders that create Unity Mesh objects from depth images so your CPU is available for other tasks.
  • Perform projection mapping with multiple projectors and/or Kinect cameras.
  • Perform user view-dependent projection mapping on both static and moving scenes.

Components of This Toolkit:

  1. KinectV2Server (C#) – a standalone executable that streams Kinect data via TCP sockets to Unity and other applications.
  2. RoomAlive Toolkit scripts and shaders for Unity (C#) – provide capabilities to receive Kinect data, create Unity scenes based on pre-existing calibration data, as well as perform view-dependent projection mapping for tracked users. These scripts are organized in two parts:
    • Assets/RoomAliveToolkit – Required scripts and shaders.
    • Assets/RoomAliveToolkit_Examples – Optional scripts and scenes that contain pre-assembled example scenes. This folder can be safely omitted in new projects.

Prerequisites

  • Unity 5.5 (or better)
  • Visual Studio 2015 Community Edition (or better)
  • Kinect for Windows v2 SDK
  • ProCamCalibration from RoomAlive Toolkit (for obtaining projector and Kinect camera calibration)

Please note: The KinectV2Server project uses the SharpDX and Math.NET Numerics packages. These will be downloaded and installed automatically via NuGet when RoomAlive Toolkit is built. After downloading the code, please build the KinectV2Server project in Visual Studio.

Unity Package

RoomAlive Toolkit scripts can be easily imported into your Unity project using the pre-compiled Unity package. Alternatively, manually copy the contents of the Assets directory into your Unity project Assets directory.

Scene Examples

The following tutorials describe how to build a new RoomAlive Unity scene from scratch. If you’d rather jump in and see a basic pre-configured RoomAlive Toolkit scene in Unity, open the RoomAliveUnity project and look at the scenes in Assets/RoomAliveToolkit_Examples/Scenes. There are two example scenes:

  • TestRATScene1x1 – A complete example with one projector, one Kinect camera and one user with projection mapping enabled.
  • TestRATScene3x3 – Three projectors, three Kinect cameras and one user with projection mapping enabled.

Please note: These example scenes use the calibration and OBJ files from our office spaces, and will therefore not be correct for your space. In particular, projection mapping will not be correct.

Tutorial #1: Setting Up a Basic Scene with RoomAlive Toolkit for Unity

This tutorial demonstrates setting up a Unity scene given a RoomAlive Toolkit calibration file. The scene consists of several game objects representing the following:

  • Kinect cameras in the room
  • projectors in the room
  • users in the room (typically acquired and tracked by Kinect’s skeletal tracking)
  • static geometry of the room (i.e., the OBJ file saved during calibration)
  • dynamic geometry of the room (i.e., the Unity Mesh objects assembled on the fly from streaming Kinect cameras)

A correctly configured scene includes many connections and dependencies among these game objects. While these can be configured manually in the editor, we recommend using the provided RATSceneSetup helper script which automates much of this process.

Room Calibration

The first step is to calibrate your set of projectors and cameras using the CalibrateEnsamble tool from RoomAliveToolkit.

Please follow the instructions here from RoomAlive Toolkit on how to calibrate your room setup.

If you have multiple cameras and/or multiple projectors, see the detailed instructions here.

It is important to place your cameras and projectors in the room to ensure that there is substantial overlap between projectors and cameras. Both the color camera and the depth cameras in each Kinect device must observe a good portion of the projected surface in order for calibration to succeed.

Once you have successfully calibrated with CalibrateEnsamble.exe:

  1. Save the resulting calibration (File->Save)
  2. Save the OBJ file of your configuration (File->Save to OBJ).
  3. Copy the resulting calibration and OBJ file (including the .xml, .obj, .jpg, and .mat files) into the Assets/Resources/{YourCalibrationName} directory of your Unity project.
  4. (optional) Close ProjectorServer.exe, CalibrateEnsamble.exe and KinectServer.exe. They are not needed anymore, but (if desired) they can be left running as they do not interfere with Unity.

Scene Setup

Create your Unity scene:

  1. Open a new scene and save it.
  2. Check that there is a RoomAliveToolkit directory in the Assets directory of your project.
  3. Check that you have copied the calibration file and OBJ files (including the .xml, .obj, .jpg, and .mat files) into the Assets/Resources/{YourCalibrationName} directory of your Unity project (see the Room Calibration section above). You could also use our sample calibration data (“officeTest”) if you do not have a calibration handy.
  4. Create a new empty object in your scene and name it “MyRoom”.
  5. Disable (or delete) the MainCamera game object. It is not needed because each projector (and each user) is a camera.
  6. Reset the position and orientation of MyRoom object to ensure it is at the origin.
  7. Add the following two components (scripts) to the MyRoom object (you can use Add Component->RoomAliveToolkit->{ScriptName} in the Inspector to quickly find the scripts from the toolkit):
    • RATCalibrationData
    • RATSceneSetup
  8. Find the calibration xml file in your project view and drag it into the Calibration field of the RATCalibrationData component.
  9. In RATCalibrationData, press the Reload Calibration Data button. You should see a "Loaded: True" message right below the button if the calibration data is successfully loaded.
  10. In RATSceneSetup, press the Use Default 3D Models button. You should see two 3D models linked in the editor fields, Kinect Model and Projector Model respectively.
  11. In RATSceneSetup, make sure that the component says Ready at the bottom. If not, press Reload Calibration Data again.
  12. If RATSceneSetup says Ready, press the Build RoomAlive Scene button. This creates a complete scene in the MyRoom object.
  13. The scene should now include the Kinects and projectors at the locations in your room as determined by the calibration. The scripts should correctly connect all behaviors in the scene. Your MyRoom object inspector should look something like this: MyRoom Scripts
  14. To add the 3D model of your room into the scene, simply drag the obj file under the MyRoom object.
  15. (optional) Add a directional light above so that you can see the object better.
  16. Inspect the scene hierarchy; it should include at least one Kinect object and one projector. In the scene views, both the projector and the Kinect will be visible as 3D models, located exactly where they physically are in your room.
  17. If you are using our TestRATScene1x1 calibration (officeTest.xml and officeTest.obj), your scene will look something like this:

Test Scene in Unity

KinectV2Server: Streaming Kinect Data to Unity

After building the KinectV2Server project, start the executable KinectV2Server.exe on each PC connected to a Kinect camera. In contrast to the simple KinectServer.exe used with calibration, this server has a GUI that can be used to configure several streaming and encoding parameters.

KinectV2Server

Check that JPEG color compression is selected from the drop down menu.

Taking care that you are not in view of the Kinect, capture the background of your empty scene (Background->Acquire Background menu option). Save the configuration with File->Save Settings and leave the server running.

Running the Scene

At this point if you hit the play button in the Unity editor, the Scene view should display both the OBJ file and the real-time Kinect depth geometry from your Kinect camera. If you walk in front of the camera, you should see yourself in the Scene view. However, the Game view will be all black since projection mapping has not yet been configured.

How Does View-Dependent Projection Mapping Work?

View-Dependent Projection Mapping uses the projectors in the room to display virtual 3D content that appears perspectively correct from a single viewpoint (‘user view’). To perform the correct distortions of the projected image, projection mapping requires the precise calibration of the projectors, the position of the user’s head and the real geometry (objects and walls) of the room.

Here is a quick summary of various steps involved in view-dependent projection mapping:

  • A ‘user view’ off-screen render is performed. This is the ‘target’ or ‘desired’ visual the user should see after projection onto a possibly non-flat surface. When rendering 3D virtual objects, this will likely require the user’s head position.
  • A graphics projection matrix is assembled for each projector in the ensemble. This uses the projector intrinsics, and, because the principal point of the projector is most likely not at the center of the projected image, uses an 'off-center' or 'oblique' style perspective projection matrix (see the sketch after this list).
  • The projector’s projection matrix is combined with calibrated projector and depth camera pose information to create a transformation matrix mapping a 3D point in the coordinate frame of a given depth camera to a 3D point in the projector’s view volume.
  • A second transformation matrix is assembled, mapping a point in a given depth camera’s coordinate system to the user’s view volume. This is used to compute the texture coordinates into the ‘user view’ (above) associated with each 3D depth camera point.
  • Vertex and geometry shaders use the above transformations to render a depth image to transformed vertices and texture coordinates for a given projector and a given depth camera. Essentially, the shaders render the receiving surface of the projected light, with a texture that is calculated to match the ‘user view’ from the user’s point of view, as projected by the projector.
  • A projector’s final rendering is performed by rendering each Kinect depth image using the above shaders. This procedure is performed for all projectors in the ensemble. Note that in this process, the depth images may be updated every frame; this is possible because the calibration and projection mapping process is fundamentally 3D in nature.
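
To make the 'off-center' projection step concrete, here is a minimal C# sketch of how such a matrix could be assembled from a projector's pinhole intrinsics. It is illustrative only and not the toolkit's actual code: the parameter names (fx, fy, cx, cy) are assumptions, and the sign conventions depend on how the calibration defines its image axes.

    using UnityEngine;

    public static class ProjectorProjection
    {
        // Builds an off-center (oblique) perspective projection matrix from
        // pinhole intrinsics: focal lengths fx, fy and principal point cx, cy
        // (all in pixels), plus the projector image size and clip planes.
        // Axis/sign conventions may need flipping for a given calibration.
        public static Matrix4x4 FromIntrinsics(float fx, float fy, float cx, float cy,
                                               float width, float height,
                                               float near, float far)
        {
            // Frustum extents on the near plane, offset by the principal point.
            float left   = -cx * near / fx;
            float right  = (width - cx) * near / fx;
            float bottom = -(height - cy) * near / fy;
            float top    = cy * near / fy;

            // Standard OpenGL-style off-center frustum matrix.
            Matrix4x4 m = Matrix4x4.zero;
            m[0, 0] = 2f * near / (right - left);
            m[0, 2] = (right + left) / (right - left);
            m[1, 1] = 2f * near / (top - bottom);
            m[1, 2] = (top + bottom) / (top - bottom);
            m[2, 2] = -(far + near) / (far - near);
            m[2, 3] = -2f * far * near / (far - near);
            m[3, 2] = -1f;
            return m;
        }
    }

When the principal point sits exactly at the image center, the frustum extents become symmetric and this reduces to an ordinary centered perspective projection.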

Tutorial #2: User Tracking and View-Dependent Projection Mapping

In Tutorial #1, we created a simple RoomAlive Toolkit scene. This tutorial extends the scene to include view-dependent projection mapping with a single tracked user.

This tutorial picks up from the last step in Tutorial #1.

Set Up a Tracked User

We will add a game object representing the user to the scene. This object’s position and orientation will be updated by the Kinect. Furthermore, this object will include a Camera which will be used to perform offscreen rendering of the user’s view. This offscreen render target is then used in subsequent view-dependent projection mapping.
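
The 'offscreen render of the user's view' relies on a standard Unity mechanism: pointing a camera at a RenderTexture so that it renders off screen. The sketch below only shows that mechanism; RATUserViewCamera manages its own render target internally, and the resolution chosen here is an arbitrary assumption.

    using UnityEngine;

    [RequireComponent(typeof(Camera))]
    public class OffscreenUserViewSketch : MonoBehaviour
    {
        // Arbitrary resolution for the offscreen user view.
        public int width = 1024;
        public int height = 1024;

        void Start()
        {
            // Redirect this camera's output into an offscreen texture.
            var userView = new RenderTexture(width, height, 24);
            GetComponent<Camera>().targetTexture = userView;

            // The projection mapping pass can later sample this texture to
            // compute what each projector must emit so that the user sees
            // the intended image from their point of view.
        }
    }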

  1. In the MyRoom object’s RATSceneSetup component click on Add User button.
  2. This will add a new user game object under MyRoom, with the following components added and properly configured:
    • RATUser
    • RATUserViewCamera
    • RATProjectionPass (2x) – this script defines shaders used in two stages of projection mapping with real world geometry (e.g., the saved room 3D geometry or the realtime geometry from Kinect). The two stages are user view rendering and projection rendering.

Configuring Unity Layers

Projection mapping requires knowing which objects are to be rendered in each rendering pass. For example, users should typically only see “virtual” objects, since they can already see the real world (static room geometry). However, those virtual objects should be rendered onto (pasted on top of) some real-world geometry, which in turn can be static (pre-acquired) or dynamic (acquired at run time from Kinect cameras). The RoomAlive Toolkit uses Unity layers to specify which objects are virtual, real-world geometry, etc. Each Camera in the scene uses layer information to perform culling during projection mapping.

Our projection mapped scene requires four layers. Unity projects do not allow you to save and set layers automatically, so these layers must be manually created when a new scene is created.

Create four new layers in your scene (Inspector->Layers->Add Layer…), and name them:

  1. StaticSurfaces – existing static room geometry that is loaded from a file (OBJ file, for example)
  2. DynamicSurfaces – dynamic geometry that changes frame to frame and represents moving physical objects (from a Kinect camera, for example)
  3. Virtual3DObjects – virtual 3D objects that will be rendered for the user’s perspective
  4. VirtualTextures – virtual objects that should be texture mapped onto existing surfaces; these objects will be rendered as flat user-independent layers, like stickers on the physical geometry

    Layers

Next, assign the scene objects to the appropriate layers (a small helper sketch for doing this from a script follows the list):

  1. Find the root of the 3D scene model file (in our example “officeTest”) and assign that object (and all children) to the StaticSurfaces layer.
  2. Find all DepthMesh objects in the scene (basically any object with the RATDepthMesh component) and assign them (and all children) to the DynamicSurfaces layer.
  3. Find the objects in the scene that are to be projection mapped according to the user’s view, i.e., so as to appear as 3D object from the perspective of the user. Assign those objects to the Virtual3DObjects layer.
  4. Find the objects in the scene that you want to texture map on top of existing geometry without view-dependent rendering (e.g., a virtual map on the wall). Assign those objects to VirtualTextures layer.
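
Assigning an object and all of its children to a layer can also be done from a script rather than by hand in the Hierarchy. The following is a minimal helper sketch using only standard Unity API (it is not part of the toolkit); the layer names must match the ones created above.

    using UnityEngine;

    public static class LayerUtil
    {
        // Recursively assigns the named layer to a game object and all of its
        // children, e.g. LayerUtil.SetLayerRecursively(roomModel, "StaticSurfaces").
        public static void SetLayerRecursively(GameObject obj, string layerName)
        {
            int layer = LayerMask.NameToLayer(layerName);
            if (layer < 0)
            {
                Debug.LogError("Layer not defined: " + layerName);
                return;
            }
            SetLayerRecursively(obj.transform, layer);
        }

        static void SetLayerRecursively(Transform t, int layer)
        {
            t.gameObject.layer = layer;
            foreach (Transform child in t)
                SetLayerRecursively(child, layer);
        }
    }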

Configure Culling Layers in User’s Projection Passes

The final step is to configure the culling masks of all different cameras in the scene (including the user’s view and all projectors).

  1. In RATProjectionManager (a component of MyRoom) set Texture Layers = VirtualTextures.
  2. (optional) Add a 3D object to the scene that you would like to projection map. For example, add a 3D cube to the scene and position it somewhere in front of your static geometry. Add this object to the Virtual3DObjects layer.
  3. (optional) Add one plane object, sized appropriately and placed in front of some wall in the scene. Add that plane to the VirtualTextures layer. This is a view-independent layer which will appear like a sticker in the scene.
  4. Each User object in your scene should have one RAT User View Camera component and two RAT Projection Pass components. Configure each as follows:
    • In RATUserViewCamera set Culling Mask = Virtual3DObjects
    • In the first RATProjectionPass (first added script) set Target Surface Layers = StaticSurfaces (make sure to uncheck Default). Then click on the Set Static Defaults button.
    • In the second RATProjectionPass (second added script) set Target Surface Layers = DynamicSurfaces (make sure to uncheck Default). Then click on the Set Dynamic Defaults button.
  5. The User configuration should now look like this:

User Configuration
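
For reference, the culling-mask choices configured in the editor above correspond to ordinary Unity layer masks. The sketch below only illustrates which camera is allowed to see which layers; the toolkit components expose these choices as the editor fields shown above, and the camera references here (userViewCamera, projectorCamera) are placeholders for this illustration.

    using UnityEngine;

    public class CullingMaskSketch : MonoBehaviour
    {
        public Camera userViewCamera;   // camera on the User object
        public Camera projectorCamera;  // camera on a projector object

        void Start()
        {
            // The user's view renders only the virtual 3D content.
            userViewCamera.cullingMask = LayerMask.GetMask("Virtual3DObjects");

            // The projection passes target the real-world geometry layers:
            // static (the OBJ file) and dynamic (live Kinect depth meshes).
            int surfaceMask = LayerMask.GetMask("StaticSurfaces", "DynamicSurfaces");

            // A projector must be able to see those surfaces in order to
            // paste the user-view texture onto them.
            projectorCamera.cullingMask = surfaceMask;
        }
    }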

RATProjectionPass Explained

RATProjectionPass script defines the shaders to be used in two stages of projection mapping for real world geometry (e.g., the room 3D geometry or the depth image geometry from Kinect). The two stages are user view rendering and projection rendering.

In particular, for a specific layer or layers, the script specifies what shaders to use when rendering the user’s view into an offscreen render target (User View Shader) and also when doing the projection mapping for each projector (Projection Shader).

Each RATUserViewCamera can have multiple RATProjectionPass scripts attached. These can be controlled by inspecting the ‘ProjectionLayers’ list in the RATUserViewCamera inspector.

Normally in Unity, materials are used to specify shaders for a given bit of geometry. So why is the RATProjectionPass component needed? Why not just use different materials on scene objects?

Unity materials define the shaders used in rendering a particular object regardless of the camera. However, projection mapping requires different shaders to be used for the same object when it is rendered by a different camera in the scene. Think of projection passes like materials that operate on entire layers (i.e., multiple objects) and where the shaders used in rendering are selected depending on the camera.

Why different materials per camera? There are a few reasons. For example, we want to see the 3D room geometry in the Scene view with captured textures, but when rendering from the perspective of the user, we want the real world to be rendered black so that the projectors are not re-projecting the textures of the real objects on top of those real objects. As another example, consider that in the projection mapping pass, the colors of the geometry will be taken from the user view texture and not from the geometry itself.
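
One way to picture this per-camera shader selection is with standard Unity callbacks: the same renderer can be handed a different material depending on which camera is about to render it. The sketch below only illustrates the idea; the toolkit applies the selection per layer inside its projection passes rather than per object, and the material fields here are placeholders.

    using UnityEngine;

    [RequireComponent(typeof(Renderer))]
    public class PerCameraMaterialSketch : MonoBehaviour
    {
        public Camera userViewCamera;     // the user's offscreen view camera
        public Material texturedMaterial; // e.g. captured room textures, for the Scene view
        public Material blackMaterial;    // real geometry rendered black in the user view

        void OnWillRenderObject()
        {
            // Camera.current is the camera that is about to render this object,
            // so the material can be chosen per camera rather than per object.
            Renderer r = GetComponent<Renderer>();
            r.sharedMaterial = (Camera.current == userViewCamera)
                ? blackMaterial
                : texturedMaterial;
        }
    }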

Running the Scene

Here is the final scene graph for the assembled project (TestRATScene1x1), including a test FloorPlan model (Virtual3DObject layer) and an example 2D texture (VirtualTextures layer) on the wall.

Test Scene 1x1

Here is another example (TestRATScene3x3), this time with 3 projectors and 3 Kinect cameras. The scene includes the same 3D virtual objects: a test FloorPlan model and an example 2D texture on the wall.

Test Scene 3x3

If you run the project now, you should see some projection-mapped objects in your Game view. To learn how to move the Game window to the target display, and thus see the scene projected directly on top of your room, read the section below on handling the Game window and multi-display configurations.

If there is no tracked user in front of the Kinect camera, the projection mapping will be done from the perspective of the Kinect camera itself.

If the user is in front of the Kinect camera (and actively tracked) the projection mapping will be rendered from their perspective.

Here are a few things you can try if the projection mapping isn’t what you expect:

  1. Each RATUser can be given a Look At object to specify where the user is looking. Try creating a small empty object somewhere in the middle of your captured geometry and setting it as the Look At target so that the user always focuses on it.
  2. Sometimes the arrangement of Kinect and projector is not optimal for tracking the user. Consider manually rotating the Kinect camera both in the physical world and in the Unity scene by 180 degrees. This way (as long as you do not move your projector or rebuild your RoomAlive scene), you should be able to move and be tracked behind the projector and see the projection mapping projected on the wall. In this case, you probably should disable DepthMesh rendering.

Handling the Game Window and Multi-Display Configurations

Rendering correctly on multiple projectors is handled differently depending on whether you are running in the Unity Editor or as a standalone application (from a compiled executable).

Running the Game in the Unity Editor

If you run your scene in the Editor, the output will be displayed in the Game window.

First, make sure that the RATProjectionManager Screen Setup is set to Editor. Then move the Game window to the desired location on the projection display.

To assist in pixel precise alignment of the Game window with the projector, RoomAlive Toolkit for Unity contains a utility called RATMoveGameWindow (select Window->Move Game Window from the menu). Dock this tool window in your interface for best performance.

RATMoveGameWindow

In the RATMoveGameWindow tool, set the desired position and size of the Game window and then press the Move Game Window button to move it there. These coordinates can be saved (and loaded) for your convenience.

Running on Multiple Projectors in the Unity Editor

While using the editor there is no way to create multiple Game windows to render to multiple displays. Instead, arrange the displays contiguously in Windows and then span the Game window across them. For example, three projector displays can be arranged in a row so that a single Game window can span all of them.

Please note: there may be a maximum Game window width and height, so carefully tile the displays in Windows (potentially in multiple rows) to not exceed this limitation. This limitation does not seem to be present when the game is run as a standalone application (outside of the Unity Editor).

Setting up the Viewports

If the Game window spans multiple projectors, each projector must render only to a portion of that Game window. This is achieved by setting the correct screen viewports in RATProjectionManager (note the values are from 0 to 1.0, as a fraction of the window width or height).

Here are the viewports configured for a scene consisting of three projectors arranged side by side: Projection Viewports
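
The screen viewports are ordinary normalized Unity viewport rectangles. As a rough sketch using plain Unity API (not the toolkit's own configuration fields), three projector cameras spanning one Game window side by side could be laid out like this:

    using UnityEngine;

    public class ViewportLayoutSketch : MonoBehaviour
    {
        // Projector cameras, ordered left to right to match the physical layout.
        public Camera[] projectorCameras;

        void Start()
        {
            int n = projectorCameras.Length;
            for (int i = 0; i < n; i++)
            {
                // Each camera renders to a horizontal slice of the window,
                // expressed as fractions of the window width/height (0..1).
                projectorCameras[i].rect = new Rect((float)i / n, 0f, 1f / n, 1f);
            }
        }
    }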

Running as a Standalone Application

In the Editor set the RATProjectionManager Screen Setup to Multi Display. Then build your game.

Assuming that the configuration of projector displays has not changed between the time you ran your calibration and the time you run your game, each projector should now display the correct portion of the game.

If the arrangement has changed, you may need to manually edit the display numbers in the calibration XML file to match the numbering used in Windows.

Recording and Playing Back Kinect Data

It is possible to record all the Kinect data to a file. This file can be played back as if it were streamed from a live camera. To do so, add the RATKinectPlaybackController script to your Kinect game object in the scene. If you want to control multiple Kinects simultaneously, add the script to the parent object containing all Kinects.

By controlling the Streaming Mode variable, you can control different aspects of playback (Read, ReadPreloaded, Write, and None). Here the editor is configured to Read mode:

Kinect Playback


Xiaoming Software Development and Pricing

Overview

This document describes the custom software development process at BrightGuo.com. It was written to help you get a better understanding of this small shop.

Custom Software Development Process

The following describes the process of commissioning software development by purchasing a listing on Taobao. If you trust me enough, WeChat payment is also possible. (Taobao is recommended, since it adds to the shop's rating.)

Step | Name | Notes
1 | Discuss requirements | Discuss what you want to build and what is feasible
2 | Confirm requirements | Agree on exactly what I will build for you; see the Requirements section
3 | Confirm delivery time and fee | See the Development Time and Fees sections
4 | Pay the deposit | Development starts only after the deposit is paid; the Taobao order is in the "paid, not shipped" state
5 | Wait for development and receive a trial version containing part of the functionality | The Taobao order moves to the "shipped" state
6 | Confirm whether the basic requirements are met and whether changes or adjustments are needed |
7 | A revised trial version is delivered; after you confirm it, pay the remaining 20% of the fee | A new Taobao order is in the "paid, not shipped" state
8 | The final version is completed and delivered to you | The order moves to the "shipped" state
9 | Confirm whether the final product needs further changes | See the Deliverables section for what is delivered
10 | Accept the final product; if you need an invoice, tell me before paying the deposit | The project is finished
11 | Bugs will continue to be fixed and usage questions answered | Looking forward to working together again

The above is the development process in the ideal case; in practice all kinds of situations come up. See Unexpected Situations in the Fees section.

Why I Do Development for Others

Since graduating in June 2013, I have spent my spare time writing many technical articles and news posts on my personal site brightguo.com. Because of this, many people have left comments or emailed me to discuss technical questions, and some have asked me to develop software for them. So far I have built dozens of applications for students, researchers, and companies of all kinds. The more interesting ones can be found on this page.

How Much Development Experience I Have

Four years of professional software development experience.

I started learning software development as an undergraduate in 2007. After Microsoft released the Kinect SDK beta in 2010, it became the starting point of my master's thesis project. When Leap Motion, Myo, and other devices were later released, I quickly bought them and started developing with them. After graduating in June 2013, I joined a company in Shanghai doing software R&D, and in my spare time I tinker with small personal software projects.

Who Else Develops with Me

A younger schoolmate of mine runs a motion-sensing company (Shanghai 噶炽) with several developers. They work full-time on Kinect-based interactive software, building 2D and 3D applications with WPF and Unity3D.

What Types of Software Development I Offer

  1. Motion-sensing interactive software (Kinect v1 and v2, Leap Motion, Myo)
  2. Traditional Windows desktop software
  3. Simple Unity3D games
  4. Python development
  5. OpenCV image-processing algorithms

I have also come to know many friends who run companies and offer more professional development of motion-sensing software, AR/VR games, and interactive applications. When I cannot take on a certain type of project, or my schedule does not allow it, I will refer you to them.

What We Do Not Do for Now

Interactive games with elaborate, complex scenes, and VR development.

Students

In the first half of every year, after Chinese New Year, quite a few students contact me about their graduation projects.

  1. Student projects are charged at a lower rate, but at the moment I only take graduation projects worth a few thousand RMB.
  2. Student projects usually come with source code. The code is kept as simple and easy to understand as possible, with finishing the graduation project as the goal; stability, modularity, and maintainability are not priorities. You will need to do plenty of testing yourself, read the code repeatedly, and discuss it with me.
  3. I encourage completing graduation projects collaboratively: I implement only part of the software, and after I explain it you modify it and finish the remaining work yourself. For example, working with an art student, she provides the concept, 3D models, and 2D image assets while I handle the software implementation; working with a hardware student, I provide gesture commands and he applies those commands to a robot.
  4. Students are welcome to email me to discuss technical ideas; it is best of all if you can implement them yourself.

Deliverables

  1. The deliverable is an executable software product; for complex software a user manual is also provided. Source code is generally not included.
  2. If you need the source code, the price is several times the development fee. For example, the price doubles for ordinary software; for software containing core algorithms, roughly ten times the fee is charged.
  3. By default the software is licensed for use on one computer; use on multiple computers requires negotiating a price.

Development Time

I generally only take projects that can be finished within one month; the limit can be relaxed for student graduation projects. Projects that would take longer than a month are referred to others.

Requirements

Confirming Requirements

Before development is confirmed, we discuss the requirements (you can send them to my email) and I give a price and an estimated development time. Simple requirements are confirmed verbally, with brief notes kept on my side. For complex requirements, after we have talked I will write a document and confirm it with you. (You can also send me a requirements document directly.)

Fees

Pricing

  1. I do not take software development jobs under ¥500.
  2. Deposit: 80% of the total development fee; transactions through my Xiaoming motion-sensing software shop on Taobao are recommended.
  3. Balance: due once you have received an acceptable trial version of the software.
  4. Voice walkthroughs: ¥200 per hour. After the deposit is paid, you get one free hour of walkthrough for every ¥1000 of fees. Within China I will call you directly; abroad we use QQ or similar voice tools, and I cover the communication costs.
  5. On-site technical support by our staff: depending on the distance, a staff fee of ¥500 to ¥1000 per day is charged; this does not include food, lodging, or transportation.

Pricing Reference

  1. One Leap Motion gesture: ¥1000 to ¥1500
  2. One Kinect image effect: ¥500 to ¥1000
  3. The prices of the listings in the Taobao shop

Unexpected Situations

Unexpected things always happen:

  1. The project is cancelled, the research topic is dropped, or the result does not meet the intended requirements (this can be avoided in the requirements discussion and confirmation stage, and has not happened so far). In that case I give a partial refund based on how much of the software has already taken shape. For example, if the project was estimated at five days and I have done one day of development when, for some special reason, you decide the next day not to continue, I will refund 80% of the amount paid.
  2. I run into a technical problem I cannot solve and you insist that the feature must be implemented. In this case I will give a full refund.

Contact

  1. Hours: after 20:00 on weekdays (Beijing time); brief exchanges are possible during the day.
  2. You can email me at i@brightguo.com, or add my personal WeChat guoming0406 or personal QQ 374704388. Email is preferred.

Other

Requirement Changes

If the requirements change substantially before delivery, we will discuss whether to increase the development fee and whether to push back the delivery date.

If You Need On-Site Technical Support from Me

I currently work in Shanghai and can only travel for support on non-working days; if support is needed on a workday, I will arrange for staff from my schoolmate's company to go instead.

Privacy and Confidentiality

  1. Software can be copied, and any software purchased here may also be sold to others. If you need a special exclusivity restriction (for example, you do not want competitors to obtain the software), a corresponding fee applies; the exclusivity fee equals the development fee.
  2. After development, the software has a confidentiality period of a few months (I will not actively promote it; the free confidentiality period is at most three months), for example until after your thesis defense or until a certain launch event has ended. A longer confidentiality period requires a negotiated fee.

Do Not Distribute the Software or Source Code on the Internet

  1. Once purchased, the software and source code may be used freely and shared among friends, but please do not upload them to the Internet for public sharing.

ASUS Set to Release a Second-Generation Xtion

Today I happened to log in to wordpress.com and saw a new post by Heresy; to my surprise, a new depth sensor is finally on the way.

Unexpectedly, ASUS has announced that it will release a second-generation Xtion depth sensor



Based on the information available so far, there is not much to go on. It only appears that the new sensor should be fairly small, and that it will provide both a depth image and a color image. As for which technology it uses, its resolution, what kind of applications it is designed for, and what the development environment will be, none of this is mentioned; below the picture there is only a note asking interested parties to contact xtionapp@asus.com.

So when will there be further news? We will just have to keep waiting and see.