
What’s the Difference Between Energy and Power?

How many mystery writers does it take to change a 60-watt lightbulb? Two — one to screw the bulb almost all the way in, and one to provide a surprising twist at the end.

How many Energy Solutions columnists does it take to change a 60-watt lightbulb? One — all he does is tell you what a watt is, and he doesn't even change the lightbulb.

If you are still with me, dear reader, you have chosen the columnist over the mystery writer. You are brave. (Or if you are just into lightbulb jokes, see my green building lightbulb jokes here.)

Chances are, you may be worried about higher energy prices, global warming, energy insecurity, or all of the above and more. You may want to become more aware of your energy use, and become more efficient. A few weeks ago I wrote about radiation terminology; today I'm going to focus on energy terminology. Knowing is half the battle.

Watts measure power — kilowatt-hours measure energy

When you get your utility bill, the electricity you've used is measured in kilowatt-hours (kWh). While a watt is a measure of power, a kWh is a measure of energy. Energy is defined as the capacity to do work, such as creating heat, light, or motion. If you run a 60-watt lightbulb for one hour, you've used 60 watt-hours, or 0.06 kilowatt-hours, since a kWh is 1,000 watt-hours. In other words, 0.06 kWh is the amount of energy you need to run a lightbulb for an hour.

Homes are typically charged only for the electricity they use, measured in kWh. But commercial and industrial facilities also pay "demand charges," which are calculated based on their peak power draw (usually measured in megawatts, or MW); these charges compensate the electric utility for ensuring that it has enough power available to meet that demand.

Appliances are rated based on power

Boilers and furnaces are also sized based on their heating power, in Btus per hour in the U.S. and in kilowatts elsewhere. A typical residential unit puts out 100,000 Btu/hr (29 kW), while commercial units tend to be much more powerful. A 100,000 Btu/hr boiler burning at full power for a day will produce 2.4 million Btus of heat (about 700 kWh).

How much energy does your entire home or workplace use? Overall energy consumption of buildings, including both electricity and other fuels, is typically counted in million Btus per year (usually abbreviated as MMBtu — don't ask). To get a single whole-building MMBtu number, we have to convert all fuel sources into that unit and then add them up. You can find conversion ratios for doing this online, covering all the fuel sources you might use, such as propane, cordwood, natural gas, coal, and others. The Home Energy Yardstick from the federal Energy Star program makes this really easy.

Watts are like miles-per-hour

Let's come back to that 60-watt lightbulb. Power is a measure of the rate at which energy flows, and in electrical systems it is measured in watts (W). Watts are basically the miles-per-hour measurement of the electrical world — they tell you how fast the electrons are speeding down the highway. For those who are keeping track, one watt is equivalent to electricity flowing at a rate of one joule per second, which is also equivalent to 3.4 Btus per hour.

A 60-watt lightbulb will consume electricity at a rate of 60 watts. A laborer working through the day will put out 75 watts of power. A medium-sized car might consume 100,000 watts. (One horsepower is equivalent to about 750 watts, so that's roughly a 133-hp car.) A small gasoline generator puts out 2,000 watts; the Vermont Yankee nuclear power plant puts out 650 megawatts, or 650,000,000 watts. Many other pieces of equipment come with power ratings to describe the rate at which they use energy.
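If you like to see the arithmetic spelled out, here is a minimal TypeScript sketch (my own illustration, not part of the original column) that turns a power rating and a run time into energy, using the bulb and boiler figures above:

```typescript
// Energy (kWh) = power (W) * time (h) / 1,000: the same arithmetic your meter does.
function energyKWh(powerWatts: number, hours: number): number {
  return (powerWatts * hours) / 1000;
}

const bulb = energyKWh(60, 1);           // 0.06 kWh: a 60-watt bulb running for one hour
const boilerDay = energyKWh(29_000, 24); // ~696 kWh: a 29 kW (100,000 Btu/hr) boiler for a day

console.log(`Bulb: ${bulb} kWh; boiler: ${boilerDay.toFixed(0)} kWh`);
```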
Getting to know metrics hands-on

I got to know some of these metrics hands-on after I installed our solar photovoltaic power system. The system includes a charge controller, which takes in power from the solar panels and feeds it to the batteries for storage. The controller includes a digital readout telling me exactly what the system is producing at any given time. I have 1,050 watts of panels, which means that under optimal conditions they are rated to produce that much power.

In reality, the power on the readout is constantly changing as the sun and temperature conditions change. Solar panels like it cool, so on a nice cool, sunny April day, I might get a peak reading of 1,365 watts. Right now, with some clouds dancing across the sky, it's reading 723 watts.

As you know, watts measure rate. When you see a cop on the Interstate and take your foot off the gas, you might drop from 70 mph to 65 mph in a few seconds. The same thing happens with the solar panels and the sun. How far you have driven at the end of the day is determined by your average speed and how long you drove. As I write this, my solar array has been online for 6 hours and 1 minute, and has produced 2.13 kWh in that time. That's about enough energy to have done a load of dishes in my dishwasher.

Comparing from one building to another

In order to compare energy use from one building to another, we typically normalize it by the building's floor area, giving us energy numbers in thousand Btus per square foot per year (kBtu/ft²·yr).
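Here is a minimal TypeScript sketch of that normalization (again my own illustration: the house and its usage numbers are made up, while the conversion factors, 3,412 Btu per kWh of electricity and roughly 100,000 Btu per therm of natural gas, are standard ones):

```typescript
// Whole-building energy intensity, in thousand Btus per square foot per year.
const BTU_PER_KWH = 3_412;     // site electricity
const BTU_PER_THERM = 100_000; // natural gas, approximately

function energyIntensity(
  electricityKWhPerYear: number,
  gasThermsPerYear: number,
  floorAreaSqFt: number
): number {
  const totalBtu =
    electricityKWhPerYear * BTU_PER_KWH + gasThermsPerYear * BTU_PER_THERM;
  return totalBtu / 1000 / floorAreaSqFt; // kBtu/ft²·yr
}

// A hypothetical 2,000 ft² house using 7,000 kWh and 600 therms a year:
console.log(energyIntensity(7000, 600, 2000).toFixed(1) + " kBtu/ft²·yr"); // ≈ 41.9
```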
The average onsite energy use for office buildings in the U.S. is 76.3 kBtu/ft²·yr. The average for single-family detached homes is 43.8 kBtu/ft²·yr. (For multi-family homes of five-plus units, it's 49.5; for mobile homes, it's 73.4 kBtu/ft²·yr.) If these numbers look surprisingly high compared with single-family homes, keep in mind that we're talking per square foot, not per home.

If all this feels intimidating, remember that (if you grew up in the U.S.) you've managed to master the arbitrary system of inches, feet, and yards. These energy metrics are much simpler!

Is replacing windows a waste of money?

Dear Energy Solutions,
I have heard that replacing windows is a waste of money. Is that really true?
– Paula

Dear Paula,
A lot of people have the idea that replacing old windows is one of the first things they should do in an energy renovation of an old building. There are a lot of good reasons to replace old windows, and not all of them are about energy. These include aesthetics, maintenance issues, and comfort. (Yes, I consider comfort and energy to be different considerations. It can be very uncomfortable to sit next to an older single- or double-pane window in the winter, but in absolute terms, that window may not be costing you a ton of energy.) Any of these factors, along with overall energy use, may point you toward window replacement.

I would slow down and look at other options, however. Analysis of "payback" is tricky, but there are credible calculations showing payback, based on energy saved, for window replacement of 10–40 years. For most people, that's a long time. There may be other measures, such as sealing up air leaks, that will improve your comfort and finances much faster. Rehabbing older windows, or simply adding storm windows, can also be very cost-effective, with paybacks of under 10 or even five years.

What are your thoughts, questions, or comments on energy metrics, operating a solar array, and replacing windows? Please discuss below.

Tristan Roberts is Editorial Director at BuildingGreen, Inc., in Brattleboro, Vermont, which publishes information on green building solutions. Read more Energy Solutions columns, including columns by Alex Wilson, for whom Tristan is filling in, on the Energy Solutions homepage. You can also keep up with Alex's adventures on sabbatical at ATWilson.com.


Agritrade Resources Wraps Up VLCC Purchase

Hong Kong-based Agritrade Resources Limited has finalized the purchase of a very large crude carrier (VLCC) from Marshall Islands-based shipping company Chris Tanker Corporation.

Featuring a capacity of 309,300 dwt, the VLCC, which is classed by Lloyd's Register, was constructed in December 2001. It is scheduled to join its new owner in January 2017.

The company said that its wholly owned subsidiary Fair Cypress Limited purchased the oil tanker for a consideration of USD 23.7 million. The parties earlier said that USD 2.37 million would be paid as a deposit within three business days of the signing of the agreement, while the remaining USD 21.33 million would be paid upon delivery of the VLCC.

Following the completion of the deal, the group will own three VLCCs, "which would contribute stable, sustainable and diversified income and cash flows to the group on a long-term basis," Agritrade Resources said.


How to Build TensorFlow Models for Mobile and Embedded Devices

TensorFlow models can be used in applications running on mobile and embedded platforms. TensorFlow Lite and TensorFlow Mobile are two flavors of TensorFlow for resource-constrained mobile devices. TensorFlow Lite supports a subset of the functionality of TensorFlow Mobile, and it delivers better performance thanks to its smaller binary size and fewer dependencies. This article covers the topics involved in using a trained TensorFlow model in an application: the model is saved and then used for inference and prediction in the mobile application.

This article is an excerpt from the book Mastering TensorFlow 1.x written by Armando Fandango. The book will help you leverage the power of TensorFlow and Keras to build deep learning models, using concepts such as transfer learning, generative adversarial networks, and deep reinforcement learning.

To learn how to use TensorFlow models on mobile devices, the following topics are covered:

- TensorFlow on mobile platforms
- TF Mobile in Android apps
- TF Mobile demo on Android
- TF Mobile demo on iOS
- TensorFlow Lite
- TF Lite demo on Android
- TF Lite demo on iOS

TensorFlow on mobile platforms

TensorFlow can be integrated into mobile apps for many use cases that involve one or more of the following machine learning tasks:

- Speech recognition
- Image recognition
- Gesture recognition
- Optical character recognition
- Image or text classification
- Image, text, or speech synthesis
- Object identification

To run TensorFlow in mobile apps, we need two major ingredients:

- A trained and saved model that can be used for predictions
- A TensorFlow binary that can receive the inputs, apply the model, produce the predictions, and send the predictions as output

The high-level architecture is simple (the book shows it as a figure): the mobile application code sends the inputs to the TensorFlow binary, which uses the trained model to compute predictions and sends them back.

TF Mobile in Android apps

The TensorFlow ecosystem enables it to be used in Android apps through the interface class TensorFlowInferenceInterface, and the TensorFlow Java API in the jar file libandroid_tensorflow_inference_java.jar. You can either use the jar file from JCenter, download a precompiled jar from ci.tensorflow.org, or build it yourself. The inference interface is available as a JCenter package and can be included in the Android project by adding the following code to the build.gradle file:

```groovy
allprojects {
    repositories {
        jcenter()
    }
}

dependencies {
    compile 'org.tensorflow:tensorflow-android:+'
}
```

Note: Instead of using the pre-built binaries from JCenter, you can also build them yourself using Bazel or Cmake by following the instructions at this link: https://github.com/tensorflow/tensorflow/blob/r1.4/tensorflow/contrib/android/README.md

Once the TF library is configured in your Android project, you can call the TF model with the following four steps:

1. Load the model:

```java
TensorFlowInferenceInterface inferenceInterface =
    new TensorFlowInferenceInterface(assetManager, modelFilename);
```

2. Send the input data to the TensorFlow binary:

```java
inferenceInterface.feed(inputName, floatValues, 1, inputSize, inputSize, 3);
```

3. Run the prediction or inference:

```java
inferenceInterface.run(outputNames, logStats);
```

4. Receive the output from the TensorFlow binary:

```java
inferenceInterface.fetch(outputName, outputs);
```

TF Mobile demo on Android

In this section, we shall learn about recreating the Android demo app provided by the TensorFlow team in their official repo.
The Android demo will install the following four apps on your Android device:

- TF Classify: This is an object identification app that identifies the images in the input from the device camera and classifies them into one of the pre-defined classes. It does not learn new types of pictures, but tries to classify them into one of the categories that it has already learned. The app is built using the Inception model pre-trained by Google.
- TF Detect: This is an object detection app that detects multiple objects in the input from the device camera. It continues to identify the objects as you move the camera around in continuous picture feed mode.
- TF Stylize: This is a style transfer app that transfers one of the selected predefined styles to the input from the device camera.
- TF Speech: This is a speech recognition app that identifies your speech, and if it matches one of the predefined commands in the app, it highlights that specific command on the device screen.

Note: The sample demo only works for Android devices with an API level greater than 21, and the device must have a modern camera that supports FOCUS_MODE_CONTINUOUS_PICTURE. If your device camera does not support this feature, then you have to apply the patch submitted to TensorFlow by the author: https://github.com/tensorflow/tensorflow/pull/15489/files

The easiest way to build and deploy the demo app on your device is using Android Studio. To build it this way, follow these steps:

1. Install Android Studio. We installed Android Studio on Ubuntu 16.04 from the instructions at the following link: https://developer.android.com/studio/install.html
2. Check out the TensorFlow repository, and apply the patch mentioned in the previous tip. Let's assume you checked out the code into the tensorflow folder in your home directory.
3. Using Android Studio, open the Android project in the path ~/tensorflow/tensorflow/examples/android.
4. Expand the Gradle Scripts option from the left bar and then open the build.gradle file.
5. In the build.gradle file, locate the def nativeBuildSystem definition and set it to 'none'. In the version of the code we checked out, this definition is at line 43:

```groovy
def nativeBuildSystem = 'none'
```

6. Build the demo and run it on either a real or simulated device. We tested the app on several devices (the device list is not reproduced here).
7. You can also build the APK and install the APK file on the virtual or actual connected device. Once the app installs on the device, you will see the four apps we discussed earlier.

You can also build the whole demo app from source using Bazel or Cmake by following the instructions at this link: https://github.com/tensorflow/tensorflow/tree/r1.4/tensorflow/examples/android

TF Mobile in iOS apps

TensorFlow enables support for iOS apps by following these steps:

1. Include TF Mobile in your app by adding a file named Podfile in the root directory of your project, with the following content:

```ruby
target 'Name-Of-Your-Project'
    pod 'TensorFlow-experimental'
```

2. Run the pod install command to download and install the TensorFlow experimental pod.
3. Open the myproject.xcworkspace file to open the workspace, so you can add the prediction code to your application logic.
Note: To create your own TensorFlow binaries for iOS projects, follow the instructions at this link: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/examples/ios

Once the TF library is configured in your iOS project, you can call the TF model with the following four steps:

1. Load the model:

```cpp
PortableReadFileToProto(file_path, &tensorflow_graph);
```

2. Create a session:

```cpp
tensorflow::Status s = session->Create(tensorflow_graph);
```

3. Run the prediction or inference and get the outputs:

```cpp
std::string input_layer = "input";
std::string output_layer = "output";
std::vector<tensorflow::Tensor> outputs;
tensorflow::Status run_status = session->Run(
    {{input_layer, image_tensor}}, {output_layer}, {}, &outputs);
```

4. Fetch the output data:

```cpp
tensorflow::Tensor* output = &outputs[0];
```

TF Mobile demo on iOS

In order to build the demo on iOS, you need Xcode 7.3 or later. Follow these steps to build the iOS demo apps:

1. Check out the TensorFlow code into a tensorflow folder in your home directory.
2. Open a terminal window and execute the following commands from your home folder to download the Inception V1 model, extract the label and graph files, and move these files into the data folders inside the sample app code:

```bash
$ mkdir -p ~/Downloads
$ curl -o ~/Downloads/inception5h.zip \
    https://storage.googleapis.com/download.tensorflow.org/models/inception5h.zip \
    && unzip ~/Downloads/inception5h.zip -d ~/Downloads/inception5h
$ cp ~/Downloads/inception5h/* ~/tensorflow/tensorflow/examples/ios/benchmark/data/
$ cp ~/Downloads/inception5h/* ~/tensorflow/tensorflow/examples/ios/camera/data/
$ cp ~/Downloads/inception5h/* ~/tensorflow/tensorflow/examples/ios/simple/data/
```

3. Navigate to one of the sample folders and download the experimental pod:

```bash
$ cd ~/tensorflow/tensorflow/examples/ios/camera
$ pod install
```

4. Open the Xcode workspace:

```bash
$ open tf_simple_example.xcworkspace
```

5. Run the sample app in the device simulator. The sample app will appear with a Run Model button.

The camera app requires an Apple device to be connected, while the other two can run in a simulator too.

TensorFlow Lite

TF Lite is the new kid on the block, and was still in developer preview at the time of writing this book. TF Lite is a very small subset of TensorFlow Mobile and TensorFlow, so the binaries compiled with TF Lite are very small in size and deliver superior performance. Apart from reducing the size of binaries, TensorFlow employs various other techniques, such as:

- The kernels are optimized for various device and mobile architectures
- The values used in the computations are quantized
- The activation functions are pre-fused
- It leverages specialized machine learning software or hardware available on the device, such as the Android NN API

The workflow for using models in TF Lite is as follows:

1. Get the model: You can train your own model or pick a pre-trained model available from different sources, and use the pre-trained model as is, retrain it with your own data, or retrain it after modifying some parts of the model. As long as you have a trained model in a file with the extension .pb or .pbtxt, you are good to proceed to the next step. We learned how to save models in the previous chapters.
2. Checkpoint the model: The model file only contains the structure of the graph, so you also need to save a checkpoint file. The checkpoint file contains the serialized variables of the model, such as weights and biases. We learned how to save a checkpoint in the previous chapters.
3. Freeze the model: The checkpoint and the model files are merged, which is also known as freezing the graph.
TensorFlow provides the freeze_graph tool for this step, which can be executed as follows:

```bash
$ freeze_graph \
    --input_graph=mymodel.pb \
    --input_checkpoint=mycheckpoint.ckpt \
    --input_binary=true \
    --output_graph=frozen_model.pb \
    --output_node_name=mymodel_nodes
```

4. Convert the model: The frozen model from step 3 needs to be converted to TF Lite format with the toco tool provided by TensorFlow:

```bash
$ toco \
    --input_file=frozen_model.pb \
    --input_format=TENSORFLOW_GRAPHDEF \
    --output_format=TFLITE \
    --input_type=FLOAT \
    --input_arrays=input_nodes \
    --output_arrays=mymodel_nodes \
    --input_shapes=n,h,w,c
```

5. The .tflite model saved in step 4 can now be used inside an Android or iOS app that employs the TFLite binary for inference. The process of including the TFLite binary in your app is continuously evolving, so we recommend the reader follows the information at this link to include the TFLite binary in your Android or iOS app: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite/g3doc

Generally, you would use the graph_transforms:summarize_graph tool to prune the model obtained in step 1. The pruned model will only have the paths that lead from input to output at the time of inference or prediction. Any other nodes and paths that are required only for training or for debugging purposes, such as saving checkpoints, are removed, thus making the size of the final model very small.

The official TensorFlow repository comes with a TF Lite demo that uses a pre-trained mobilenet model to classify the input from the device camera into 1,001 categories. The demo app displays the probabilities of the top three categories.

TF Lite demo on Android

To build the TF Lite demo on Android, follow these steps:

1. Install Android Studio. We installed Android Studio on Ubuntu 16.04 from the instructions at the following link: https://developer.android.com/studio/install.html
2. Check out the TensorFlow repository, and apply the patch mentioned in the previous tip. Let's assume you checked out the code into the tensorflow folder in your home directory.
3. Using Android Studio, open the Android project from the path ~/tensorflow/tensorflow/contrib/lite/java/demo. If it complains about a missing SDK or Gradle components, please install those components and sync Gradle.
4. Build the project and run it on a virtual device with API > 21.

We received the following warnings, but the build succeeded. You may want to resolve the warnings if the build fails:

```
Warning: The Jack toolchain is deprecated and will not run. To enable support for
Java 8 language features built into the plugin, remove 'jackOptions { ... }' from
your build.gradle file, and add

    android.compileOptions.sourceCompatibility 1.8
    android.compileOptions.targetCompatibility 1.8

Note: Future versions of the plugin will not support usage of 'jackOptions' in
build.gradle. To learn more, go to
https://d.android.com/r/tools/java-8-support-message.html

Warning: The specified Android SDK Build Tools version (26.0.1) is ignored, as it
is below the minimum supported version (26.0.2) for Android Gradle Plugin 3.0.1.
Android SDK Build Tools 26.0.2 will be used.
To suppress this warning, remove "buildToolsVersion '26.0.1'" from your
build.gradle file, as each version of the Android Gradle Plugin now has a default
version of the build tools.
```

TF Lite demo on iOS

In order to build the demo on iOS, you need Xcode 7.3 or later.
Follow these steps to build the iOS demo apps:

1. Check out the TensorFlow code into a tensorflow folder in your home directory.
2. Build the TF Lite binary for iOS from the instructions at this link: https://github.com/tensorflow/tensorflow/tree/master/tensorflow/contrib/lite
3. Navigate to the sample folder and download the pod:

```bash
$ cd ~/tensorflow/tensorflow/contrib/lite/examples/ios/camera
$ pod install
```

4. Open the Xcode workspace:

```bash
$ open tflite_camera_example.xcworkspace
```

5. Run the sample app in the device simulator.

We learned about using TensorFlow models in mobile applications and on mobile devices. TensorFlow provides two ways to run on mobile devices: TF Mobile and TF Lite. We learned how to build TF Mobile and TF Lite apps for iOS and Android, using the TensorFlow demo apps as examples.

If you found this post useful, do check out the book Mastering TensorFlow 1.x to skill up for building smarter, faster, and more efficient machine learning and deep learning systems.

Read Next:

- The 5 biggest announcements from TensorFlow Developer Summit 2018
- Getting started with Q-learning using TensorFlow
- Implement Long-short Term Memory (LSTM) with TensorFlow


Billboard.js 1.5.0 releases with new radar type, axis improvements, and more

Billboard.js, the reusable JavaScript chart library backed by D3.js, has released version 1.5.0. Billboard.js provides the easiest way to create a billboard.js chart instantly. The new version comes with seven major improvements and a host of additional bug fixes.

New radar type chart support has been added in this version for better data visualization. You can use the radar type by setting it as a data.types option value, and you can customize radar charts to get different variations of the visual data (a configuration sketch appears at the end of this post).

(Figure: different radar types)

There is also a new way to customize and position axis tick text, using the axis.[x|y|y2].tick.text.position option. For this, you just set pixel positions for the x and y coordinates; every value is then treated relative to the tick's original position.

Billboard.js also features a new axis.[x|y].clipPath option, which can be used along with the tick text position option. Generally, the clip-path attribute makes sure that axis elements are clipped so they do not surpass the actual axis area. Sometimes, however, tick text is not visible because of the clip-path attribute; this is where the axis.[x|y].clipPath option comes into play.

There is also improved line-breaking for x-axis tick text. Users can now break lines at exactly the position they want: just put a \n character where you want the chart to break the line when you bind the category names for data.

(Figure: improved lining for the x-axis)

Billboard.js also has a new tooltip.linked.name option to allow linking charts within particular name groups. So, for instance, four charts with two different name groups will interact only with the charts sharing the same linked name value.

(Figure: linked tooltip with grouped names)

Read the release notes for additional feature releases and bug fixes. Jae Sung Park, the creator of Billboard.js, states that the next release will feature multiple axes and a themed CSS file.

Read Next:

- Chart Model and Draggable and Droppable Directives
- Building Motion Charts with Tableau
- How to create a Treemap and Packed Bubble Chart in Tableau
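As promised above, here is a minimal TypeScript sketch of the new options (my own illustration based on the release notes, not official sample code; the container IDs and data are made up, and the exact option shapes may differ slightly from the docs):

```typescript
import bb from "billboard.js"; // assumes the standard bb entry point

// Opt individual series into the new radar type via data.types.
bb.generate({
  bindto: "#radarChart", // hypothetical container element
  data: {
    x: "x",
    columns: [
      ["x", "Design", "Price", "Battery", "Camera", "Screen"],
      ["Phone A", 230, 250, 100, 200, 150],
      ["Phone B", 150, 200, 180, 110, 240],
    ],
    types: { "Phone A": "radar", "Phone B": "radar" },
  },
});

// Nudge axis tick text with the new axis.[x|y|y2].tick.text.position option;
// the x/y offsets are pixels relative to each tick's original position.
bb.generate({
  bindto: "#lineChart", // hypothetical container element
  data: {
    columns: [["sales", 30, 200, 100, 400]],
    type: "line",
  },
  axis: {
    x: {
      tick: {
        text: { position: { x: 0, y: 10 } }, // push x-axis labels 10px down
      },
    },
  },
});
```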
