20190318_fbx_to_fbx.fmw
20180318_appearances_table.xlsx
Hello,
From a colleague I received an FBX export from Navisworks (originally created in Inventor). The model consists of many objects, which I would like to treat separately.
Therefore I added an identifier to the attribute table. After finishing the workbench in FME for an individual object, I want to create an iterative process to perform the analysis for all objects (using a loop transformer).
The geometry of an object can be de-aggregated into parts. From each part, the appearance can be extracted. However, the table contains duplicate material values. I wish to merge rows with the same value in the field "fme_style_appearance_name".
At the same time (this may be a misunderstanding on my side), geometry with corresponding part numbers must be merged as well.
How can this be realised? It is possible to identify duplicate values using the Matcher transformer, but unfortunately I don't know how to continue from there.
Kind regards,
Pim van der Zwaag.
FME version 2018.1.1.2 (20190121), build 18586 (64-bit)
(The FBX file can be provided.)
I've been experimenting with writing out 3D Tiles and glTF/GLB models with FME. One issue I've run into is that FME seems to be adding some kind of shininess to the surfaces, which I can't seem to get rid of. The model just has a single-color appearance. I want it to be able to cast shadows if there is a light source; however, I don't want the appearance to be 'shiny', it should just be dull.
I've set my appearance to a single Diffuse (I've also tried Ambient) color. I've set the shininess to 0 and the Opacity to 1.
In my resulting glTF file, FME has given the model a 'metallicFactor', which I'm guessing is why there is a shine to it:
"materials" : [{
    "name" : "[Color M02]",
    "pbrMetallicRoughness" : {
        "baseColorFactor" : [ 0.7764705882352941, 0.7764705882352941, 0.7764705882352941, 1 ],
        "roughnessFactor" : 1,
        "metallicFactor" : 0.5
    }
}]
This also comes through into my 3D Tiles when creating them. I see the same properties in the b3dm files in my 3D Tiles set.
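One possible workaround, sketched below, is to post-process the glTF JSON after FME writes it. In the glTF 2.0 PBR material model, a metallicFactor of 0 combined with a roughnessFactor of 1 gives a dull, non-metallic surface. This sketch assumes a text .gltf file (not binary .glb); the file paths and function name are illustrative.

```python
import json

def flatten_materials(path_in, path_out):
    """Rewrite all glTF materials so they render matte.

    metallicFactor = 0.0 removes the metallic sheen and
    roughnessFactor = 1.0 fully diffuses specular highlights.
    Only works on JSON-based .gltf files, not binary .glb.
    """
    with open(path_in) as f:
        gltf = json.load(f)
    for material in gltf.get("materials", []):
        pbr = material.setdefault("pbrMetallicRoughness", {})
        pbr["metallicFactor"] = 0.0
        pbr["roughnessFactor"] = 1.0
    with open(path_out, "w") as f:
        json.dump(gltf, f)
```

Since the b3dm payloads in a 3D Tiles set embed the same glTF material structure, fixing the materials before tiling should carry the dull appearance through to the tiles as well.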
Hi,
I am new to the AppearanceSetter, and while I have no trouble applying a colour with it, I am having trouble applying a texture. Can someone please show me how?
Attached is an FFS file with 9 roof geometries and 9 images, both having a _count attribute that references them to one another: output.zip
Thanks in advance,
This example converts an FBX model into KML for viewing in Google Earth. We will georeference a 3D duplex building and place it on the soccer field of Unwin Park in Surrey, BC. The workspace preserves the original textures in the FBX model. We will also customize KML properties, such as setting a pop-up balloon in Google Earth when the model is selected.
The source data is an FBX model of a duplex, originally from buildingSMART Alliance's Common Building Information Model Files. Download the template workspace or read in the duplex_A.fbx dataset using the FBX reader in FME.
The FBX source duplex model viewed in the FME Inspector.
The ideal geometry to write out to KML is a single mesh. We will use the Triangulator to first break down the FBX geometry into a mesh, and then the MeshMerger to unify the triangular units into a single output mesh. The MeshMerger ensures that we are storing the geometry in the most efficient way, and also makes the translation significantly faster. Without this step, the output geometry cannot be drawn in Google Earth.
Use the KMLPropertySetter to customize the balloon pop-up in the output KML. Here, we will specify a name, description, and file summary for the model. Notice that the Content will be visible in the balloon pop-up, while the Summary is only visible in the Google Earth 'Places' toolbar. For these parameters I have used:
While these are the parameters that I have used, feel free to get creative and use your own text.
We will use the LocalCoordinateSystemSetter to set the origin location of our model on Earth, which we have chosen as Unwin Park in Surrey, BC. The point 0,0 on our FBX model is the corner of the building on the ground, and this is the point that will be georeferenced.
Normally, to write to KML, data must be reprojected to the lat/long coordinate system LL-WGS84. Since the FBX model has no coordinate system, we will set the origin coordinate system to LL-WGS84, which is the FME equivalent of the WGS84 datum, the same one that supports Google Earth and Google Maps. Our source data is tagged with this coordinate system information as it passes through this transformer.
The coordinates for Unwin Park have been collected from Google Maps and are in latitude and longitude.
Run the translation to invoke the KML writer. If you are creating your workspace from scratch, remember to set "Rejected Feature Handling" to Continue Translation. You can find this by going to Workspace Parameters in your Navigator and selecting Translation.
Due to a known issue, PR# 52475, you may need to first read in a SketchUp file using the Trimble SketchUp reader and run the translation to convert it to KML, before swapping out the reader for the FBX reader. This will prime your workspace to retain the FBX textures in your KML output. Please try this workaround if your output KML buildings appear all gray. You can use this SketchUp file: duplex-A-skp.zip
Open your output in Google Earth. Congratulations! You have georeferenced your FBX model and preserved its original textures, adding some style by customizing the pop-up balloon. Below, you can see the duplex model placed successfully in Unwin Park in Surrey, BC.
The output KML properly georeferenced in Unwin Park with a balloon pop-up description.
A closer look at the output model in Google Earth showing the original textures from the FBX model.
Hi all,
I'm trying to extract 3D data from a 3DCityDB (PostgreSQL) database and write it into a SketchUp file.
I have no problem handling the surfaces and writing them into the .skp file as buildings.
But I have an issue with the textures. I manage to get the right images on the right surfaces, but they are not in the right place.
I get the texture coordinates in a field, and they are stored as below (a u,v coordinates list):
But I don't know what to do with these coordinates... and how to handle them with the TextureCoordinateSetter... Any ideas?
The idea is to create a custom reader for the 3DCityDB format and a workbench to extract data from the DB into SketchUp, multipatch, … It's already done without the textures, but it would be great (and useful) to get the textures right.
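As a starting point, a flat list of texture coordinates usually alternates u and v values, so it can be split into pairs before handing them to the TextureCoordinateSetter. The sketch below assumes the field stores numbers separated by whitespace or commas; the actual 3DCityDB storage format may differ, so the split would need adjusting.

```python
def parse_uv_list(field_value):
    """Split a flat string of texture coordinates into (u, v) pairs.

    Assumes alternating u and v values separated by whitespace or
    commas; this is an illustrative guess at the field format, not
    the documented 3DCityDB encoding.
    """
    tokens = field_value.replace(",", " ").split()
    values = [float(t) for t in tokens]
    # Even-indexed tokens are u, odd-indexed tokens are v.
    return list(zip(values[0::2], values[1::2]))
```

In CityGML (which 3DCityDB stores), texture coordinates are typically listed per linear ring in the same vertex order as the surface geometry, which is also the order the TextureCoordinateSetter expects when assigning u/v measures to vertices.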
Thank you in advance,
Fabien Laganne
I can't find any options to specify anything texture-related. Saving to multipatch through a workspace ends up with blank geometries; saving to multipatch in the Inspector does the same. I found a sample workspace* but this again ignores the textures. Is this not possible with FME?
* https://hub.safe.com/templates/citygml-to-esri-file-geodatabase
In the tutorial 2D to Simple 3D Model: Extrude Building Outlines, 3D extruded solid models were created from building footprints and heights. However, each building had the same appearance over the entire model, and often we want to set a different appearance on the roof than on the walls.
Workspace: texturewall.fmw
The attached workspace illustrates a technique to convert the extruded solid to a BRep solid, then separate the walls from the ground and roof using the z component of the vertex normals.
A copy of the bounding faces is extracted from the solid, examined to determine their normal z, then converted to appearances of different colors to be assigned to the faces of the original solid. The vertex normals are stored as measures on the polygon boundary of each face, so we need to go deep into the geometry tree to extract them.
The key parts to this workspace are:
GeometryValidator - to create the vertex normals. Vertex normals created on a solid will always point away from the inside of the solid, so the roof normals will point up, the ground normals will point down, and the wall normals will point to the side.
GeometryPropertySetter - uses a counter to create a unique id trait on each bounding face of the solid. The Geometry XQuery allows us to manipulate the faces of the solid without needing to break it up first.
GeometryPartExtractor - to extract the bounding faces from the solid, then to extract the bounding polygons from each of the faces.
TestFilter - to separate the polygons into wall, roof, and ground surfaces.
MeasureExtractor - to extract the z component of the normal from the bounding polygon.
AppearanceStyler - to create a color appearance for wall, roof, and ground.
AppearanceSetter - to set the appearance on each face of the solid, based on the id trait. Geometry XQuery is used to set the appearance on the faces without breaking up the solid.
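The wall/roof/ground split performed by the TestFilter can be sketched in plain Python. The tolerance value here is an illustrative assumption; the workspace may compare the normal z against exact values instead.

```python
def classify_face(normal_z, tolerance=0.1):
    """Classify a bounding face by the z component of its vertex normal.

    On a closed solid, outward-pointing normals mean: up = roof,
    down = ground, sideways = wall. The tolerance is an assumed
    cutoff for "sideways", not a value taken from the workspace.
    """
    if normal_z > tolerance:
        return "roof"
    if normal_z < -tolerance:
        return "ground"
    return "wall"
```

Each classified face keeps its id trait, so the matching AppearanceSetter can later target the original face inside the intact solid.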
Output viewed in the Data Inspector
I am experiencing a problem displaying multipatch features read from an Esri geodatabase. The features lose the ability to be rendered with shading.
The problem is best demonstrated using the following workspace. On the top row, a shapefile multipatch feature is read, the (default) appearance removed (for consistency with the geodatabase operation), the appearance set to green, and then displayed on the left.
On the bottom row, a geodatabase multipatch is read, its appearance removed, the appearance set to green, and then displayed on the right. The box on the right is not shaded. An identical problem occurs if it is saved to a geodatabase and viewed in ArcScene.
How do I get the box on the right to look like the box on the left?
Hi all,
This is the first time I put a question on this forum.
I have an Esri gdb feature class with multipatch objects containing aggregates of several buildings, all with realistic photo appearances draped on them. I need only one specific building, without its photo-realistic appearance, but with a new solid color appearance instead.
I'm successful in deaggregating, selecting the one building of interest, and saving that building back to a new Esri gdb, including its photo-realistic appearance.
What I cannot manage is to remove this one building's draped photo appearance and create a new solid color appearance. I tried the AppearanceRemover on several levels to also remove any inherited appearances. The removal works, as the photo draping disappears, but the remaining building stays solid black no matter how I try assigning a new solid color appearance to it using the AppearanceSetter.
Hope someone can shed some light on this issue.
/ Hans
In FME 2015, when altering the drawing style displayed, there was a checkbox which you could untick to restore the original values. This feature appears to be missing in 2016; once displays have been overwritten, there is no way to restore them.
Following up on my question here, I would like to know if there might be more FME users out there who'd also like some enhancements for 3D.
I am mainly thinking of full appearance support, meaning that FME should be able to deal with complex materials that include alpha maps, bump maps, etc., and not just the basic texture. This is important when converting format A (e.g. SketchUp) into format B (e.g. Google KML/COLLADA), because visual information gets lost in the process, which might lead to a 3D model that simply doesn't look the same or even looks bad.
Let me be clear: I'm not asking for full 3D support. This would take years to build, and there are many programs out there that are probably more suited for the job. I also don't think that the Data Inspector should become so advanced that it's able to render these complex appearances.
What would be nice, though, is if tools like Grasshopper (Rhino) could work side by side with FME. It already offers a very similar user experience. If these two applications could be combined, it would become a very powerful 3D ETL suite!
Lots of 3D formats support complex appearances (materials) that not only have a main texture, but also contain a bump map, alpha map, etc.
Until now, I have only been working with single textures, but in the near future I might need to deal with complex appearances too.
According to the documentation, an appearance can only reference 1 texture. However, in the FME Objects Python API, I did notice that the FMEAppearance class contains a getMapperReference(mapperType) method, which seems to be exactly the thing I need. Unfortunately, when I try it out with a PythonCaller, it tells me that it doesn't take arguments (so no mapperType), and if I call it without parameters it just crashes or stops the translation without a failure message. So I guess this hasn't been implemented yet... :(
Are there any plans to support multiple texture maps soon? I guess it would also mean that all the 3D format readers need to be enhanced so that they actually translate this information correctly...
The AppearanceSetter transformer sets appearance styles on the front and/or back sides of geometries. Features that are not directly modified by the transformer may still be indirectly changed, so the HOLDER input port can be used to hold back features until the transformer completes its processing. This way, you can ensure the transformer completes all processing before allowing features to continue on in the workspace. All features will be passed through the output HOLDER port.
FME maintains a library of appearance definitions, which are separate from geometries. When a feature is processed by the writer, it retrieves its appearance definition from this library. If the AppearanceSetter changes an appearance definition mid-write, any features written after this change will be written with the new appearance; features written before the change will have the old appearance. This is where the HOLDER port comes in handy: the features are all held back until the AppearanceSetter has finished its processing, so all features passing through will point to the new appearance definition.
Download and open the attached workspace template (HOLDER_Demo.fmwt) to see an example that yields different results when using the HOLDER port. The purpose of this workspace is to use the AppearanceSetter to replace the textures on the given Google SketchUp model's windows. The Tester transformer helps simulate a situation where we don't know the order in which the features will enter the AppearanceSetter, by setting a test condition such as "_part_number = 600". The visualized output is only complete if we pass all features through the HOLDER port, since they will be held back until all processing completes on the geometries. First, run the workspace as-is, with the HOLDER port unused. When viewing the output in the Data Inspector, you will see that some of the appearances have been updated and some haven't. This is because of the arbitrary order in which the features were passed through the AppearanceSetter: some features retrieved their appearance before the AppearanceSetter replaced it, and some retrieved their appearance after it was replaced.
Image: Some textures have been updated in the destination and some have not.
Next, disable the connection between the Deaggregator and the Inspector transformer. Enable the connection between the Deaggregator and the AppearanceSetter. Run the workspace again. In the Data Inspector, you will see that all of the textures are updated. This is because the features were all held by the HOLDER port until the processing was complete, so all features retrieved the updated appearance.
Image: After sending features through the HOLDER port,all textures are successfully updated in the destination.
* Data adapted from 3D Pilot and its participants. Please see www.geonovum.nl for more information.
This is our first example where we could try real textures for building walls - before, in our City of Gävle in 3D example, we had to use random pictures taken by our staff or downloaded from the Internet.
In this case, the Town of High Level supplied us with the following materials:
The total number of houses in the provided example was not very big (32 buildings), so I decided to digitize roof ridge lines from the orthophoto using MicroStation V8. In our Gävle example, we did not use the real roof structures; instead, we applied a simple centerline extraction algorithm, which sometimes works well, but in many cases creates lines that are very far from correct. For the Town of High Level, we have more realistic roofs.
I also placed street light cells in MicroStation, and found a light pole model in the Google 3D Warehouse:
The whole workflow looks similar to the Gävle one, but instead of random wall assignment, we join wall photos with the actual walls by a photo_id attribute in the AppearanceAdder.
The photos had to be modified in order to fit the walls - I manually clipped them to the wall extents:
These photos are not true textures - they are not orthorectified, nor cleaned of vegetation or vehicles. There are also only front photos of the buildings - I used the same photo for all four sides.
I also replaced the building shapes with bounding boxes, so I didn't have to take the real building outlines into account. This saves a lot of time and makes the process easier; however, as a result, we can see some grass on every roof.
I think this simplification and these lower requirements are quite acceptable for quick modelling with big numbers of features. For a particular feature of interest, this might not be the best solution.
In this project I also added some street features such as fences and street lights.
I used one of the building photos to make the fence texture. A parcel overlay helped me decide where fences should be placed - wherever we have more than one parcel boundary occupying the same place, we have adjacent parcels that have to be separated by a fence.
The light poles are a more interesting example. I used pole points and street centerlines with the NeighborFinder to calculate an angle to the nearest centerline. Later, this angle was used for rotating the copies of the original SketchUp model, so that each light is placed perpendicular to the closest centerline.
Another interesting trick here is placing the SketchUp model, which originally exists in its own coordinate universe. This is achieved by extracting the original XY coordinates, replacing the source geometry with the SketchUp model geometry, and offsetting the model to the original XY coordinates. In this example, we have a simple case where the center of the model lies at 0,0. If a SketchUp model is not positioned at this point, we need to calculate the offset by subtracting the model's coordinates from the original ones.
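The offset step above amounts to a simple translation. Here is a minimal sketch of the arithmetic; the function and parameter names are illustrative, not actual FME transformer parameters.

```python
def offset_model(model_vertices, target_x, target_y,
                 model_origin=(0.0, 0.0)):
    """Translate model-space XY vertices to a georeferenced location.

    If the model's reference point is not at (0, 0), the offset is
    the target position minus that reference point. model_origin
    defaults to (0, 0), matching the simple case described above.
    """
    ox, oy = model_origin
    dx, dy = target_x - ox, target_y - oy
    return [(x + dx, y + dy) for (x, y) in model_vertices]
```

The same idea extends to rotation: the NeighborFinder angle would be applied about the model origin before the translation, so each pole faces its nearest centerline.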
The workspace does not look very scary once it is divided with bookmarks:
The output looks as follows: